WorldWideScience

Sample records for hazard models predicting

  1. A refined QSAR model for prediction of chemical asthma hazard.

    Science.gov (United States)

    Jarvis, J; Seed, M J; Stocks, S J; Agius, R M

    2015-11-01

    A previously developed quantitative structure-activity relationship (QSAR) model has been externally validated as a good predictor of chemical asthma hazard (sensitivity: 79-86%, specificity: 93-99%). The aim was to develop and validate a second version of this model. Asthmagenic chemicals for the learning dataset were grouped by molecular weight (MW); control chemicals, for which no reported case(s) of occupational asthma had been identified, were selected at random from UK and US occupational exposure limit tables. MW banding was used in an attempt to categorically match the control group to the MW distribution of the asthmagens. About 10% of chemicals in each MW category were excluded for use as an external validation set. An independent researcher utilized a logistic regression approach to compare the molecular descriptors present in asthmagens and controls. The resulting equation generated a hazard index (HI), with a value between zero and one, as an estimate of the probability that the chemical had asthmagenic potential. The HI was determined for each compound in the external validation set. The model development sets comprised 99 chemical asthmagens and 204 controls. The external validation showed that using a cut-point HI of 0.39, 9/10 asthmagenic (sensitivity: 90%) and 23/24 non-asthmagenic (specificity: 96%) compounds were correctly predicted. The new QSAR model showed a better receiver operating characteristic plot than the original. QSAR refinement by iteration has resulted in an improved model for the prediction of chemical asthma hazard. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
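
    The hazard index described above is the fitted probability from a logistic regression over molecular descriptors. A minimal illustrative sketch of that idea follows; the descriptor matrix, package calls and random data are assumptions for illustration only, while the 0.39 cut-point and the 99/204 development-set sizes come from the abstract.

      # Illustrative sketch (not the authors' code): a logistic regression over
      # molecular descriptors yields a hazard index (HI) in [0, 1]; a cut-point
      # (0.39 in the abstract) turns the HI into a binary hazard call.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(303, 12))     # hypothetical descriptors: 99 asthmagens + 204 controls
      y_train = rng.integers(0, 2, size=303)   # 1 = asthmagen, 0 = control
      X_valid = rng.normal(size=(34, 12))      # held-out external validation set

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      hazard_index = model.predict_proba(X_valid)[:, 1]    # estimated probability of asthmagenic potential
      predicted_asthmagen = hazard_index >= 0.39           # cut-point reported in the abstract
      print(hazard_index[:5], predicted_asthmagen[:5])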

  2. An engineering model for hazard prediction of ammunition magazine doors

    NARCIS (Netherlands)

    Voort, M.M. van der; Conway, R.; Kummer, P.O.; Rakvåg, K.; Weerheijm, J.

    2015-01-01

    An accidental explosion in an ammunition magazine may break-up the structure and cause a significant debris hazard. Experimental and theoretical research mainly focusses on the break-up of the reinforced concrete or brick magazine walls. The behaviour of the door has usually been ignored in the

  3. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads’ STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are identified for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The significant influencing factors are identified for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
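
    The "best fitting distribution" step in this abstract can be reproduced in outline by fitting each candidate distribution to observed durations and comparing goodness of fit. The sketch below uses synthetic durations and an AIC comparison as a stand-in for the authors' procedure; scipy's fisk distribution is its name for the log-logistic.

      # Minimal sketch (synthetic data, not the SIMS records): fit candidate
      # duration distributions and rank them, e.g. by AIC.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      clearance_minutes = rng.gamma(shape=2.0, scale=30.0, size=500)   # hypothetical clearance times

      candidates = {"gamma": stats.gamma, "log-logistic": stats.fisk, "weibull": stats.weibull_min}
      for name, dist in candidates.items():
          params = dist.fit(clearance_minutes, floc=0)             # location fixed at zero
          loglik = np.sum(dist.logpdf(clearance_minutes, *params))
          aic = 2 * (len(params) - 1) - 2 * loglik                 # loc is fixed, not estimated
          print(f"{name:13s} AIC = {aic:.1f}")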

  4. Spatial prediction models for landslide hazards: review, comparison and evaluation

    Directory of Open Access Journals (Sweden)

    A. Brenning

    2005-01-01

    Full Text Available The predictive power of logistic regression, support vector machines and bootstrap-aggregated classification trees (bagging, double-bagging) is compared using misclassification error rates on independent test data sets. Based on a resampling approach that takes into account spatial autocorrelation, error rates for predicting 'present' and 'future' landslides are estimated within and outside the training area. In a case study from the Ecuadorian Andes, logistic regression with stepwise backward variable selection yields the lowest error rates and demonstrates the best generalization capabilities. The evaluation outside the training area reveals that tree-based methods tend to overfit the data.
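
    A comparison of this kind can be set up with standard tooling; the sketch below contrasts the three model families named in the abstract on synthetic data and reports test-set misclassification error. It uses a plain random split, whereas the study uses a spatial resampling scheme, and all data and settings here are assumptions.

      # Hedged sketch: logistic regression vs. SVM vs. bagged classification trees,
      # scored by misclassification error on held-out data.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "logistic regression": LogisticRegression(max_iter=1000),
          "support vector machine": SVC(),
          "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100),
      }
      for name, clf in models.items():
          error = 1.0 - clf.fit(X_tr, y_tr).score(X_te, y_te)
          print(f"{name:24s} misclassification error = {error:.3f}")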

  5. LNG fires: A review of experimental results, models and hazard prediction challenges

    Energy Technology Data Exchange (ETDEWEB)

    Raj, Phani K. [Technology and Management Systems, Inc., 102 Drake Road, Burlington, MA 01803 (United States)]. E-mail: tmsinc1981@verizon.net

    2007-02-20

    A number of experimental investigations of LNG fires (of sizes 35 m diameter and smaller) were undertaken worldwide during the 1970s and 1980s to study their physical and radiative characteristics. This paper reviews the published data from several of these tests, including the largest tests to date, the 35 m Montoir tests. Also reviewed in this paper is the state of the art in modeling LNG pool and vapor fires, including thermal radiation hazard modeling. The review is limited to considering the integral and semi-empirical models (solid flame and point source); CFD models are not reviewed. Several aspects of modeling LNG fires are reviewed, including the physical characteristics, such as the (visible) fire size and shape, tilt and drag in windy conditions, smoke production, radiant thermal output, etc., and the consideration of experimental data in the models. Comparisons of model results with experimental data are indicated and current deficiencies in modeling are discussed. The requirements in the US and European regulations related to LNG fire hazard assessment are reviewed, in brief, in the light of model inaccuracies, criteria for hazards to people and structures, and the effects of mitigating circumstances. The paper identifies: (i) critical parameters for which there exist no data, (ii) uncertainties and unknowns in modeling and (iii) deficiencies and gaps in current regulatory recipes for predicting hazards.

  6. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    Science.gov (United States)

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  7. Modeling and Prediction of Wildfire Hazard in Southern California, Integration of Models with Imaging Spectrometry

    Science.gov (United States)

    Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)

    2001-01-01

    Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated

  8. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile was followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  9. Survival prediction based on compound covariate under Cox proportional hazard models.

    Directory of Open Access Journals (Sweden)

    Takeshi Emura

    Full Text Available Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.
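
    The authors' implementation is the R package "compound.Cox"; as a rough Python illustration of the compound covariate idea (an assumption, not their code), one can fit a univariate Cox model per gene and combine the genes into a single risk score weighted by the univariate coefficients, as sketched below on fabricated data using the lifelines package.

      # Compound covariate sketch: univariate Cox coefficient per gene, then a
      # weighted sum of gene expressions as the prognostic score.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(2)
      n, p = 100, 20
      genes = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"g{j}" for j in range(p)])
      df = genes.assign(time=rng.exponential(5.0, n), event=rng.integers(0, 2, n))

      weights = {}
      for gene in genes.columns:
          cph = CoxPHFitter().fit(df[[gene, "time", "event"]], "time", "event")
          weights[gene] = cph.params_[gene]            # univariate log hazard ratio

      compound_score = genes.mul(pd.Series(weights)).sum(axis=1)   # risk score per patient
      print(compound_score.head())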

  10. Hidden Markov Model for quantitative prediction of snowfall and analysis of hazardous snowfall events over Indian Himalaya

    Science.gov (United States)

    Joshi, J. C.; Tankeshwar, K.; Srivastava, Sunita

    2017-04-01

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992-2012). The model has six observations and six states. The most probable observation and state sequences have been computed using the Forward and Viterbi algorithms, respectively. The Baum-Welch algorithm has been used for optimizing the model parameters. The model has been validated for two winters (2012-2013 and 2013-2014) by computing the root mean square error (RMSE) and accuracy measures such as percent correct (PC), critical success index (CSI) and Heidke skill score (HSS). The RMSE of the model has also been calculated using the leave-one-out cross-validation method. Snowfall predicted by the model during hazardous snowfall events in different parts of the Himalaya matches well with observations. The HSS of the model for all the stations implies that the optimized model has better forecasting skill than a random forecast for both days. The RMSE of the optimized model has also been found to be smaller than that of the persistence forecast, and smaller than the standard deviation, for both days.
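
    The decoding step mentioned in the abstract, recovering the most probable hidden state sequence with the Viterbi algorithm, can be written compactly; the sketch below uses randomly generated HMM parameters with six states and six observation symbols as in the paper, not the calibrated snowfall model.

      # Toy Viterbi decoder for a discrete-observation HMM (log-space).
      import numpy as np

      rng = np.random.default_rng(3)
      n_states, n_symbols = 6, 6
      A = rng.dirichlet(np.ones(n_states), size=n_states)    # transition probabilities
      B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission probabilities
      pi = np.full(n_states, 1.0 / n_states)                 # initial state distribution
      obs = rng.integers(0, n_symbols, size=10)              # observed symbol indices

      def viterbi(obs, A, B, pi):
          T, N = len(obs), len(pi)
          delta = np.zeros((T, N))               # best log-probability ending in each state
          psi = np.zeros((T, N), dtype=int)      # back-pointers
          delta[0] = np.log(pi) + np.log(B[:, obs[0]])
          for t in range(1, T):
              scores = delta[t - 1][:, None] + np.log(A)    # scores[i, j]: state i -> state j
              psi[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(psi[t, path[-1]]))
          return path[::-1]

      print(viterbi(obs, A, B, pi))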

  11. A fast global tsunami modeling suite as a trans-oceanic tsunami hazard prediction and mitigation tool

    Science.gov (United States)

    Mohammed, F.; Li, S.; Jalali Farahani, R.; Williams, C. R.; Astill, S.; Wilson, P. S.; B, S.; Lee, R.

    2014-12-01

    The past decade has witnessed two mega-tsunami events, the 2004 Indian Ocean tsunami and the 2011 Japan tsunami, as well as multiple major tsunami events: 2006 Java and Kuril Islands, 2007 Solomon Islands, 2009 Samoa and 2010 Chile, to name a few. These events generated both local and far-field tsunami inundations with runup ranging from a few meters to around 40 m in the coastal impact regions. With a majority of the coastal population at risk, there is a need for a sophisticated outlook towards catastrophe risk estimation and a quick mitigation response. At the same time, tools and information are needed to aid advanced tsunami hazard prediction. There is an increased need for insurers, reinsurers and Federal hazard management agencies to quantify coastal inundations and the vulnerability of coastal habitat to tsunami inundations. A novel tool is developed to model local and far-field tsunami generation, propagation and inundation to estimate tsunami hazards. The tool is a combination of the NOAA MOST propagation database and an efficient and fast GPU (Graphical Processing Unit)-based non-linear shallow water wave model solver. The tsunamigenic seismic sources are mapped onto the NOAA unit source distribution along subduction zones in the ocean basin. Slip models are defined for tsunamigenic seismic sources through a slip distribution on the unit sources while maintaining limits of fault areas. A GPU-based finite volume solver is used to simulate non-linear shallow water wave propagation, inundation and runup. Deformation on the unit sources provides initial conditions for modeling local impacts, while the wave history from the propagation database provides boundary conditions for far-field impacts. The modeling suite shows good agreement for basin-wide tsunami propagation, validating both local and far-field tsunami inundation predictions.

  12. Spatial prediction of landslide hazard using discriminant analysis and GIS

    Science.gov (United States)

    Peter V. Gorsevski; Paul Gessler; Randy B. Foltz

    2000-01-01

    Environmental attributes relevant for spatial prediction of landslides triggered by rain and snowmelt events were derived from a digital elevation model (DEM). Those data, in conjunction with statistics and a geographic information system (GIS), provided a detailed basis for spatial prediction of landslide hazard. The spatial prediction of landslide hazard in this paper is...

  13. A Course in Hazardous Chemical Spills: Use of the CAMEO Air Dispersion Model to Predict Evacuation Distances.

    Science.gov (United States)

    Kumar, Ashok; And Others

    1989-01-01

    Provides an overview of the Computer-Aided Management of Emergency Operations (CAMEO) model and its use in the classroom as a training tool in the "Hazardous Chemical Spills" course. Presents six problems illustrating classroom use of CAMEO. Lists 16 references. (YP)

  14. Identifying and modeling safety hazards

    Energy Technology Data Exchange (ETDEWEB)

    DANIELS,JESSE; BAHILL,TERRY; WERNER,PAUL W.

    2000-03-29

    The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.

  15. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Vinicius M. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Muratov, Eugene [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Laboratory of Theoretical Chemistry, A.V. Bogatsky Physical-Chemical Institute NAS of Ukraine, Odessa 65080 (Ukraine); Fourches, Denis [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Strickland, Judy; Kleinstreuer, Nicole [ILS/Contractor Supporting the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), P.O. Box 13501, Research Triangle Park, NC 27709 (United States); Andrade, Carolina H. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States)

    2015-04-15

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • Developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative
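
    The Correct Classification Rate reported here is the mean of sensitivity and specificity (balanced accuracy). The sketch below shows that evaluation pattern for a Random Forest classifier on synthetic descriptors; the data, descriptor set and settings are assumptions, not the curated dataset or workflow of the study.

      # Random Forest sensitizer classifier scored by CCR on an external set.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import balanced_accuracy_score, confusion_matrix
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=1500, n_features=30, random_state=0)
      X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.25, random_state=0)

      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      y_hat = rf.predict(X_ext)

      tn, fp, fn, tp = confusion_matrix(y_ext, y_hat).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      ccr = 0.5 * (sensitivity + specificity)        # equals balanced accuracy
      print(f"CCR = {ccr:.2f} (check: {balanced_accuracy_score(y_ext, y_hat):.2f})")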

  16. Site-specific seismic-hazard maps and deaggregation in the western United States using the NGA models for ground-motion prediction

    Science.gov (United States)

    Harmsen, Stephen

    2011-01-01

    The 2008 National Seismic Hazard Mapping Project (NSHMP) update for the conterminous United States employs several new ground-motion prediction equations which include modern empirical models of linear and nonlinear site response to local and regional earthquakes. The recent availability of attenuation functions incorporating site conditions via Vs30 values permits the calculation of site-specific hazard maps for a wide range of spectral accelerations. I compare alternative site specific hazard maps using Vs30 values estimated according to the methods of Wills and Clahan (2006), Wald and Allen (2007), and Yong and others (in press). These maps are presented for 5-hertz (Hz) and 3-second spectral accelerations having 2 percent probability of exceedance in 50 years for central California and the western part of southern California.

  17. GIS Based Landslide Hazard Mapping Prediction in Ulu Klang, Malaysia

    Directory of Open Access Journals (Sweden)

    Mukhlisin Muhammad

    2010-09-01

    Full Text Available Since 1993, a number of landslides have been reported in Ulu Klang, Malaysia. These landslides caused fatalities and economic losses. Most of these landslides occurred in man-made slopes. A Geographical Information System (GIS) is proposed to be used as the base platform for the production of the landslide hazard map. This study highlights the area-based landslide hazard assessment of the Ulu Klang area using a GIS application, in order to help the engineer or the town planner to identify the most suitable development areas as well as to predict the potential landslide hazard areas. Four main factors that influence landslide occurrence were chosen: slope gradient and aspect, geology, surface cover/land use and precipitation distribution. Landslide hazardous areas were analyzed and mapped using the GIS application to produce a hazard map with five different indexes (i.e., very low, low, medium, high and very high hazard). The results of the analysis were verified using the landslide location data. The result showed that the model was very suitable for predicting landslide hazard and generating landslide hazard maps.

  18. Predictive modeling of hazardous waste landfill total above-ground biomass using passive optical and LIDAR remotely sensed data

    Science.gov (United States)

    Hadley, Brian Christopher

    This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass against an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative of green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400--960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
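
    The comparison between ordinary least squares and tree-structured regression described above follows a standard pattern; the sketch below reproduces it on fabricated predictors (standing in for the spectral and topographic variables) and reports the share of variance explained on held-out data.

      # OLS vs. regression tree for predicting above-ground biomass (synthetic data).
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(4)
      X = rng.normal(size=(400, 5))                  # e.g. derivative vegetation indices (hypothetical)
      biomass = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=1.5, size=400)
      X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)

      for name, reg in [("OLS", LinearRegression()),
                        ("regression tree", DecisionTreeRegressor(max_depth=4, random_state=0))]:
          reg.fit(X_tr, y_tr)
          print(f"{name:16s} R2 = {r2_score(y_te, reg.predict(X_te)):.3f}")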

  19. Radon monitoring and hazard prediction in Ireland

    Science.gov (United States)

    Elio, Javier; Crowley, Quentin; Scanlon, Ray; Hodgson, Jim; Cooper, Mark; Long, Stephanie

    2016-04-01

    Radon is a naturally occurring radioactive gas which forms as a decay product from uranium. It is the largest source of natural ionizing radiation affecting the global population. When radon is inhaled, its short-lived decay products can interact with lung tissue leading to DNA damage and development of lung cancer. Ireland has among the highest levels of radon in Europe and eighth highest of an OECD survey of 29 countries. Every year some two hundred and fifty cases of lung cancer in Ireland are linked to radon exposure. This new research project will build upon previous efforts of radon monitoring in Ireland to construct a high-resolution radon hazard map. This will be achieved using recently available high-resolution airborne gamma-ray spectrometry (radiometric) and soil geochemistry data (http://www.tellus.ie/), indoor radon concentrations (http://www.epa.ie/radiation), and new direct measurement of soil radon. In this regard, legacy indoor radon concentrations will be correlated with soil U and Th concentrations and other geogenic data. This is a new approach since the vast majority of countries with a national radon monitoring programme rely on indoor radon measurements, or have a spatially limited dataset of soil radon measurements. Careful attention will be given to areas where an indicative high radon hazard based on geogenic factors does not match high indoor radon concentrations. Where such areas exist, it may imply that some parameter(s) in the predictive model does not match that of the environment. These areas will be subjected to measurement of radon soil gas using a combination of time averaged (passive) and time dependant (active) measurements in order to better understand factors affecting production, transport and accumulation of radon in the natural environment. Such mapping of radon-prone areas will ultimately help to inform when prevention and remediation measures are necessary, reducing the radon exposure of the population. Therefore, given

  20. A flexible additive multiplicative hazard model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas H.

    2002-01-01

    Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect

  1. Predictive accuracy of novel risk factors and markers: A simulation study of the sensitivity of different performance measures for the Cox proportional hazards regression model.

    Science.gov (United States)

    Austin, Peter C; Pencina, Michael J; Steyerberg, Ewout W

    2017-06-01

    Predicting outcomes that occur over time is important in clinical, population health, and health services research. We compared changes in different measures of performance when a novel risk factor or marker was added to an existing Cox proportional hazards regression model. We performed Monte Carlo simulations for common measures of performance: concordance indices ( c, including various extensions to survival outcomes), Royston's D index, R2-type measures, and Chambless' adaptation of the integrated discrimination improvement to survival outcomes. We found that the increase in performance due to the inclusion of a risk factor tended to decrease as the performance of the reference model increased. Moreover, the increase in performance increased as the hazard ratio or the prevalence of a binary risk factor increased. Finally, for the concordance indices and R2-type measures, the absolute increase in predictive accuracy due to the inclusion of a risk factor was greater when the observed event rate was higher (low censoring). Amongst the different concordance indices, Chambless and Diao's c-statistic exhibited the greatest increase in predictive accuracy when a novel risk factor was added to an existing model. Amongst the different R2-type measures, O'Quigley et al.'s modification of Nagelkerke's R2 index and Kent and O'Quigley's [Formula: see text] displayed the greatest sensitivity to the addition of a novel risk factor or marker. These methods were then applied to a cohort of 8635 patients hospitalized with heart failure to examine the added benefit of a point-based scoring system for predicting mortality after initial adjustment with patient age alone.
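
    The basic experiment, simulating survival data and measuring how a concordance index changes when a marker is added to a reference Cox model, can be sketched as below. The data-generating values, the lifelines calls and the censoring scheme are illustrative assumptions and not the simulation design of the paper.

      # Change in Harrell's c-statistic when a novel marker joins a Cox model.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(5)
      n = 2000
      x_ref = rng.normal(size=n)                     # established risk factor
      x_new = rng.normal(size=n)                     # novel marker, true log hazard ratio 0.5
      t = rng.exponential(1.0 / np.exp(0.8 * x_ref + 0.5 * x_new))
      cutoff = np.quantile(t, 0.7)                   # administrative censoring (~30%)
      df = pd.DataFrame({"x_ref": x_ref, "x_new": x_new,
                         "time": np.minimum(t, cutoff), "event": (t <= cutoff).astype(int)})

      for cols in (["x_ref"], ["x_ref", "x_new"]):
          cph = CoxPHFitter().fit(df[cols + ["time", "event"]], "time", "event")
          risk = cph.predict_partial_hazard(df)      # higher risk -> shorter survival
          print(f"model with {cols}: c = {concordance_index(df['time'], -risk, df['event']):.3f}")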

  2. Artificial neural networks versus proportional hazards Cox models to predict 45-year all-cause mortality in the Italian Rural Areas of the Seven Countries Study.

    Science.gov (United States)

    Puddu, Paolo Emilio; Menotti, Alessandro

    2012-07-23

    Projection pursuit regression, multilayer feed-forward networks, multivariate adaptive regression splines and trees (including survival trees) have challenged classic multivariable models such as the multiple logistic function, the proportional hazards life table Cox model (Cox), the Poisson's model, and the Weibull's life table model to perform multivariable predictions. However, only artificial neural networks (NN) have become popular in medical applications. We compared several Cox versus NN models in predicting 45-year all-cause mortality (45-ACM) by 18 risk factors selected a priori: age; father life status; mother life status; family history of cardiovascular diseases; job-related physical activity; cigarette smoking; body mass index (linear and quadratic terms); arm circumference; mean blood pressure; heart rate; forced expiratory volume; serum cholesterol; corneal arcus; diagnoses of cardiovascular diseases, cancer and diabetes; minor ECG abnormalities at rest. Two Italian rural cohorts of the Seven Countries Study, made up of men aged 40 to 59 years, enrolled and first examined in 1960 in Italy. Cox models were estimated by: a) forcing all factors; b) a forward-; and c) a backward-stepwise procedure. Observed cases of deaths and of survivors were computed in decile classes of estimated risk. Forced and stepwise NN were run and compared by C-statistics (ROC analysis) with the Cox models. Out of 1591 men, 1447 died. Model global accuracies were extremely high by all methods (ROCs > 0.810) but there was no clear-cut superiority of any model to predict 45-ACM. The highest ROCs (> 0.838) were observed by NN. There were inter-model variations to select predictive covariates: whereas all models concurred to define the role of 10 covariates (mainly cardiovascular risk factors), family history, heart rate and minor ECG abnormalities were not contributors by Cox models but were so by forced NN. Forced expiratory volume and arm circumference (two protectors), were

  3. Artificial neural networks versus proportional hazards Cox models to predict 45-year all-cause mortality in the Italian Rural Areas of the Seven Countries Study

    Directory of Open Access Journals (Sweden)

    Puddu Paolo

    2012-07-01

    Full Text Available Abstract Background Projection pursuit regression, multilayer feed-forward networks, multivariate adaptive regression splines and trees (including survival trees have challenged classic multivariable models such as the multiple logistic function, the proportional hazards life table Cox model (Cox, the Poisson’s model, and the Weibull’s life table model to perform multivariable predictions. However, only artificial neural networks (NN have become popular in medical applications. Results We compared several Cox versus NN models in predicting 45-year all-cause mortality (45-ACM by 18 risk factors selected a priori: age; father life status; mother life status; family history of cardiovascular diseases; job-related physical activity; cigarette smoking; body mass index (linear and quadratic terms; arm circumference; mean blood pressure; heart rate; forced expiratory volume; serum cholesterol; corneal arcus; diagnoses of cardiovascular diseases, cancer and diabetes; minor ECG abnormalities at rest. Two Italian rural cohorts of the Seven Countries Study, made up of men aged 40 to 59 years, enrolled and first examined in 1960 in Italy. Cox models were estimated by: a forcing all factors; b a forward-; and c a backward-stepwise procedure. Observed cases of deaths and of survivors were computed in decile classes of estimated risk. Forced and stepwise NN were run and compared by C-statistics (ROC analysis with the Cox models. Out of 1591 men, 1447 died. Model global accuracies were extremely high by all methods (ROCs > 0.810 but there was no clear-cut superiority of any model to predict 45-ACM. The highest ROCs (> 0.838 were observed by NN. There were inter-model variations to select predictive covariates: whereas all models concurred to define the role of 10 covariates (mainly cardiovascular risk factors, family history, heart rate and minor ECG abnormalities were not contributors by Cox models but were so by forced NN. Forced expiratory volume and arm

  4. Submarine landslides: processes, triggers and hazard prediction.

    Science.gov (United States)

    Masson, D G; Harbitz, C B; Wynn, R B; Pedersen, G; Løvholt, F

    2006-08-15

    Huge landslides, mobilizing hundreds to thousands of km³ of sediment and rock, are ubiquitous in submarine settings ranging from the steepest volcanic island slopes to the gentlest muddy slopes of submarine deltas. Here, we summarize current knowledge of such landslides and the problems of assessing their hazard potential. The major hazards related to submarine landslides include destruction of seabed infrastructure, collapse of coastal areas into the sea and landslide-generated tsunamis. Most submarine slopes are inherently stable. Elevated pore pressures (leading to decreased frictional resistance to sliding) and specific weak layers within stratified sequences appear to be the key factors influencing landslide occurrence. Elevated pore pressures can result from normal depositional processes or from transient processes such as earthquake shaking; historical evidence suggests that the majority of large submarine landslides are triggered by earthquakes. Because of their tsunamigenic potential, ocean-island flank collapses and rockslides in fjords have been identified as the most dangerous of all landslide-related hazards. Published models of ocean-island landslides mainly examine 'worst-case scenarios' that have a low probability of occurrence. Areas prone to submarine landsliding are relatively easy to identify, but we are still some way from being able to forecast individual events with precision. Monitoring of critical areas where landslides might be imminent and modelling landslide consequences so that appropriate mitigation strategies can be developed would appear to be areas where advances on current practice are possible.

  5. Computer Model Locates Environmental Hazards

    Science.gov (United States)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  6. Processing LiDAR Data to Predict Natural Hazards

    Science.gov (United States)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  7. Climate Prediction Center (CPC) U.S. Hazards Outlook

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Prediction Center releases a US Hazards Outlook daily, Monday through Friday. The product highlights regions of anticipated hazardous weather during the...

  8. Gradient lasso for Cox proportional hazards model.

    Science.gov (United States)

    Sohn, Insuk; Kim, Jinseog; Jung, Sin-Ho; Park, Changyi

    2009-07-15

    There has been an increasing interest in expressing a survival phenotype (e.g. time to cancer recurrence or death) or its distribution in terms of a subset of the expression data of a subset of genes. Due to high dimensionality of gene expression data, however, there is a serious problem of collinearity in fitting a prediction model, e.g. Cox's proportional hazards model. To avoid the collinearity problem, several methods based on penalized Cox proportional hazards models have been proposed. However, those methods suffer from severe computational problems, such as slow or even failed convergence, because of high-dimensional matrix inversions required for model fitting. We propose to implement the penalized Cox regression with a lasso penalty via the gradient lasso algorithm that yields faster convergence to the global optimum than do other algorithms. Moreover the gradient lasso algorithm is guaranteed to converge to the optimum under mild regularity conditions. Hence, our gradient lasso algorithm can be a useful tool in developing a prediction model based on high-dimensional covariates including gene expression data. Results from simulation studies showed that the prediction model by gradient lasso recovers the prognostic genes. Also results from diffuse large B-cell lymphoma datasets and Norway/Stanford breast cancer dataset indicate that our method is very competitive compared with popular existing methods by Park and Hastie and Goeman in its computational time, prediction and selectivity. R package glcoxph is available at http://datamining.dongguk.ac.kr/R/glcoxph.
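
    The paper's gradient lasso and the R package glcoxph are the authors' own implementation; as a loose Python analogue (an assumption, not their algorithm), a Cox model can be fitted with a lasso-type penalty in lifelines to shrink high-dimensional gene-expression coefficients, as sketched below on fake data.

      # L1-penalized Cox regression as a rough stand-in for gradient lasso.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(6)
      n, p = 120, 50
      X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"gene{j}" for j in range(p)])
      risk = np.exp(0.7 * X["gene0"] - 0.5 * X["gene1"])    # only two informative genes
      df = X.assign(time=rng.exponential(1.0 / risk), event=1)

      cph = CoxPHFitter(penalizer=0.2, l1_ratio=1.0)        # elastic net set to pure L1
      cph.fit(df, duration_col="time", event_col="event")

      # lifelines smooths the L1 penalty, so coefficients are shrunk towards zero
      # rather than set exactly to zero; rank them by magnitude instead.
      print(cph.params_.abs().sort_values(ascending=False).head())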

  9. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph D. thesis in 1997. This synthesis covers the two years as post doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to the seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economical consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) in a site for a given time period. Either for the regulatory context or the structural specificity (conventional structure or high risk construction), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, to evaluate the seismic motion to which the structure has to resist (including the site effects). I specialized in the field of numerical strong-motion prediction using high frequency seismic sources modelling and forming part of the IRSN allowed me to rapidly working on the different tasks of seismic hazard assessment. Thanks to the expertise practice and the participation to the regulation evolution (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)

  10. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of the hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which approaches different distributions.

  11. Assessment and Prediction of Natural Hazards from Satellite Imagery.

    Science.gov (United States)

    Gillespie, Thomas W; Chu, Jasmine; Frankenberg, Elizabeth; Thomas, Duncan

    2007-10-01

    Since 2000, there have been a number of spaceborne satellites that have changed the way we assess and predict natural hazards. These satellites are able to quantify physical geographic phenomena associated with the movements of the earth's surface (earthquakes, mass movements), water (floods, tsunamis, storms), and fire (wildfires). Most of these satellites contain active or passive sensors that can be utilized by the scientific community for the remote sensing of natural hazards over a number of spatial and temporal scales. The most useful satellite imagery for the assessment of earthquake damage comes from high-resolution (0.6 m to 1 m pixel size) passive sensors and moderate resolution active sensors that can quantify the vertical and horizontal movement of the earth's surface. High-resolution passive sensors have been used to successfully assess flood damage while predictive maps of flood vulnerability areas are possible based on physical variables collected from passive and active sensors. Recent moderate resolution sensors are able to provide near real time data on fires and provide quantitative data used in fire behavior models. Limitations currently exist due to atmospheric interference, pixel resolution, and revisit times. However, a number of new microsatellites and constellations of satellites will be launched in the next five years that contain increased resolution (0.5 m to 1 m pixel resolution for active sensors) and revisit times (daily ≤ 2.5 m resolution images from passive sensors) that will significantly improve our ability to assess and predict natural hazards from space.

  12. Proportional hazards modeling of saccadic response times during reading.

    Science.gov (United States)

    Nilsson, Mattias; Nivre, Joakim

    2013-07-01

    In this article we use proportional hazards models to examine how low-level processes affect the probability of making a saccade over time, through the period of fixation, during reading. We apply the Cox proportional hazards model to investigate how launch distance (relative to word beginning), fixation location (relative to word center), and word frequency affect the hazard of a saccadic response. This model requires that covariates have a constant impact on the hazard over time, the assumption of proportional hazards. We show that this assumption is not supported. The impact of the covariates changes with the time passed since fixation onset. To account for the non-proportional hazards we fit step functions of time, resulting in a model with time-varying effects on the hazard. We evaluate the ability to predict the timing of saccades on held-out fixation data. The model with time-varying effects performs better in predicting the timing of saccades for fixations as short as 100 ms and as long as 500 ms, when compared both to a baseline model without covariates and a model which assumes constant covariate effects. This result suggests that the time-varying effects model better recovers the time course of low-level processes that influence the decision to move the eyes. Copyright © 2013 Cognitive Science Society, Inc.
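
    The modelling step, fitting a Cox model to fixation durations and then checking whether the covariate effects are constant over the fixation, can be sketched with lifelines as below; the fabricated covariates and effect sizes are assumptions, and the proportional hazards test stands in for the article's model comparison.

      # Cox model for saccade latencies plus a proportional hazards check.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.statistics import proportional_hazard_test

      rng = np.random.default_rng(7)
      n = 1000
      df = pd.DataFrame({
          "launch_distance": rng.normal(7, 3, n),      # letters from word beginning
          "fixation_location": rng.normal(0, 2, n),    # letters from word centre
          "log_word_freq": rng.normal(3, 1, n),
      })
      rate = np.exp(0.02 * df["launch_distance"] - 0.05 * df["log_word_freq"])
      df["duration_ms"] = rng.exponential(200.0 / rate)   # time until the saccade
      df["saccade"] = 1                                    # every fixation ends with a saccade

      cph = CoxPHFitter().fit(df, duration_col="duration_ms", event_col="saccade")
      result = proportional_hazard_test(cph, df, time_transform="rank")
      print(result.summary)   # small p-values flag covariates whose effect varies over time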

  13. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads presents significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and that the best-fitting distributions differed across phases. Using the best hazard-based model for each incident time phase, reasonable predictions can be obtained for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
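
    The accelerated failure time comparison described above can be outlined with off-the-shelf fitters; the sketch below fits Weibull, log-logistic and log-normal AFT models to synthetic incident records and compares them by AIC. The Beijing data, the covariates and the generalized gamma and flexible spline variants are not reproduced here.

      # Compare parametric AFT models for incident duration by AIC (synthetic data).
      import numpy as np
      import pandas as pd
      from lifelines import WeibullAFTFitter, LogLogisticAFTFitter, LogNormalAFTFitter

      rng = np.random.default_rng(8)
      n = 800
      df = pd.DataFrame({"lanes_blocked": rng.integers(0, 3, n), "night": rng.integers(0, 2, n)})
      scale = 30.0 * np.exp(0.3 * df["lanes_blocked"] + 0.2 * df["night"])
      df["duration_min"] = rng.weibull(1.3, n) * scale    # hypothetical incident durations
      df["observed"] = 1                                  # assume no censoring

      for fitter in (WeibullAFTFitter(), LogLogisticAFTFitter(), LogNormalAFTFitter()):
          fitter.fit(df, duration_col="duration_min", event_col="observed")
          print(f"{type(fitter).__name__:22s} AIC = {fitter.AIC_:.1f}")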

  14. Uncertainty in natural hazards, modeling and decision support: An introduction to this volume [Chapter 1

    Science.gov (United States)

    Karin Riley; Matthew Thompson; Peter Webley; Kevin D. Hyde

    2017-01-01

    Modeling has been used to characterize and map natural hazards and hazard susceptibility for decades. Uncertainties are pervasive in natural hazards analysis, including a limited ability to predict where and when extreme events will occur, with what consequences, and driven by what contributing factors. Modeling efforts are challenged by the intrinsic...

  15. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...

  16. Accelerated Hazards Mixture Cure Model

    Science.gov (United States)

    Zhang, Jiajia; Peng, Yingwei

    2010-01-01

    We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model where the covariate effects on the proportion of cure and the distribution of the failure time of uncured patients are separately modeled. Unlike the existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful in some cancer treatments when the treatment effect increases gradually from zero, and the existing models usually cannot handle this situation properly. We develop a rank-based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature. PMID:19697127

  17. Hazard Warning: model misuse ahead

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Payne, Mark; Trenkel, V.

    2014-01-01

    The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based on Richard Levins' 1966 work suggesting that models can only achieve two of three desirable model attributes: realism, precision, and generality. Model creation, therefore, requires trading-off of one of these attributes in favour of the other two; however, this is often in conflict with the desires of end-users (i.e. managers or policy developers). The combination of attributes leads to models that are considered to have empirical, mechanistic, or analytical characteristics, but not a combination of them. In fisheries science, many examples can be found of models with these characteristics. However, we...

  18. Applying the additive hazard model to predict the survival time of patients with diffuse large B-cell lymphoma and to determine the effective genes, using microarray data

    Directory of Open Access Journals (Sweden)

    Arefa Jafarzadeh Kohneloo

    2015-09-01

    Full Text Available Background: Recent studies have shown that genes affecting the survival time of cancer patients play an important role as risk factors or preventive factors. The present study was designed to determine genes affecting the survival time of diffuse large B-cell lymphoma patients and to predict the survival time using these selected genes. Materials & Methods: The present study is a cohort study conducted on 40 patients with diffuse large B-cell lymphoma. For these patients, the expression of 2042 genes was measured. In order to predict the survival time, the semi-parametric additive survival model was combined with two gene selection methods, elastic net and lasso. The two methods were evaluated by plotting the area under the ROC curve over time and calculating the integral of this curve. Results: Based on our findings, the elastic net method identified 10 genes and the Lasso-Cox method identified 7 genes. GENE3325X increased the survival time (P=0.006), whereas GENE3980X and GENE377X reduced the survival time (P=0.004). These three genes were selected as important genes by both methods. Conclusion: This study showed that the elastic net method outperformed the common Lasso method in terms of predictive power. Moreover, applying the additive model instead of Cox regression and using microarray data is a usable way to predict the survival time of patients.

  19. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  1. Virtual Research Environments for Natural Hazard Modelling

    Science.gov (United States)

    Napier, Hazel; Aldridge, Tim

    2017-04-01

    The Natural Hazards Partnership (NHP) is a group of 17 collaborating public sector organisations providing a mechanism for co-ordinated advice to government and agencies responsible for civil contingency and emergency response during natural hazard events. The NHP has set up a Hazard Impact Model (HIM) group tasked with modelling the impact of a range of UK hazards with the aim of delivering consistent hazard and impact information. The HIM group consists of 7 partners initially concentrating on modelling the socio-economic impact of 3 key hazards - surface water flooding, land instability and high winds. HIM group partners share scientific expertise and data within their specific areas of interest including hydrological modelling, meteorology, engineering geology, GIS, data delivery, and modelling of socio-economic impacts. Activity within the NHP relies on effective collaboration between partners distributed across the UK. The NHP is acting as a use case study for a new Virtual Research Environment (VRE) being developed by the EVER-EST project (European Virtual Environment for Research - Earth Science Themes: a solution). The VRE is allowing the NHP to explore novel ways of cooperation including improved capabilities for e-collaboration, e-research, automation of processes and e-learning. Collaboration tools are complemented by the adoption of Research Objects, semantically rich aggregations of resources enabling the creation of uniquely identified digital artefacts resulting in reusable science and research. Application of the Research Object concept to HIM development facilitates collaboration, by encapsulating scientific knowledge in a format that can be easily shared and used by partners working on the same model but within their own areas of expertise. This paper describes the application of the VRE to the NHP use case study. It outlines the challenges associated with distributed partnership working and how they are being addressed in the VRE. A case

  2. Hazard of pharmaceuticals for aquatic environment: Prioritization by structural approaches and prediction of ecotoxicity.

    Science.gov (United States)

    Sangion, Alessandro; Gramatica, Paola

    2016-10-01

    Active Pharmaceutical Ingredients (APIs) are recognized as Contaminants of Emerging Concern (CEC) since they are detected in the environment in increasing amounts, mainly in the aquatic compartment, where they may be hazardous for wildlife. The huge lack of experimental data for a large number of end-points requires tools able to quickly highlight the potentially most hazardous and toxic pharmaceuticals, focusing experiments on the prioritized compounds. In silico tools, like QSAR (Quantitative Structure-Activity Relationship) models based on structural molecular descriptors, can predict the missing data for toxic end-points necessary to prioritize existing, or even not yet synthesized, chemicals for their potential hazard. In the present study, new externally validated QSAR models, specific to predict acute toxicity of APIs in key organisms of the three main aquatic trophic levels, i.e. algae, Daphnia and two species of fish, were developed using the QSARINS software. These Multiple Linear Regression - Ordinary Least Squares (MLR-OLS) models are based on theoretical molecular descriptors calculated by the free PaDEL-Descriptor software and selected by Genetic Algorithm. The models are statistically robust, externally predictive and characterized by a wide structural applicability domain. They were applied to predict acute toxicity for a large set of APIs without experimental data. The predictions were then processed by Principal Component Analysis (PCA) and a trend, driven by the combination of toxicities for all the studied organisms, was highlighted. This trend, named the Aquatic Toxicity Index (ATI), allowed the ranking of pharmaceuticals according to their potential toxicity upon the whole aquatic environment. Finally, a QSAR model for the prediction of this Aquatic Toxicity Index (ATI) was proposed, applicable in QSARINS for the screening of existing APIs for their potential hazard and for the a priori chemical design of APIs that are not environmentally hazardous. Copyright © 2016

  3. Experimental Concepts for Testing Seismic Hazard Models

    Science.gov (United States)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  4. Toward Building a New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
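
    For orientation, one common form of the tapered Gutenberg-Richter (TGR) relation mentioned above expresses the fraction of events with seismic moment at least M, given a threshold moment M_t, a slope beta (related to the b-value) and a corner moment M_c constrained by the moment rate; this is a standard textbook form and not necessarily the exact parameterization used by the authors.

```latex
F(M) \;=\; \left(\frac{M_t}{M}\right)^{\beta}
\exp\!\left(\frac{M_t - M}{M_c}\right),
\qquad M \ge M_t,
\qquad \beta \approx \tfrac{2}{3}\, b .
```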

  5. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  6. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... Discussions on some of the design techniques based on MPC and their ... is then calculated using the receding horizon concept, since the prediction ... of interior point methods to model predictive control, Journal of ...

  7. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model, and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are also given to illustrate different aging properties and stochastic comparisons of the model.

  8. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... written to simulate an example of a randomly generated system. This paper can serve as tutorial to anyone interested in this area of research. Keywords: model predictive control, linear systems, discrete-time systems, constraints, quadratic programming. 1. Introduction. Model Predictive Control (MPC), also ...

  9. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  10. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study the partial least squares regression method. It turns out that it is naturally adapted to this setting via the so-called Krylov sequence. The resulting PLS estimator is shown to be consistent provided that the number of terms included is taken to be equal to the number of relevant components in the regression...
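
    For readers unfamiliar with the Aalen model referred to here, its hazard specification is additive rather than multiplicative; a standard statement of the model (not specific to the PLS construction in this record) is

```latex
\lambda\bigl(t \mid X_i\bigr) \;=\; \beta_0(t) \;+\; \sum_{j=1}^{p} \beta_j(t)\, X_{ij},
```

    where the regression functions beta_j(t) are left unspecified and estimated nonparametrically from the data.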

  11. Bangladesh Delta: Assessment of the Causes of Sea-level Rise Hazards and Integrated Development of Predictive Modeling Towards Mitigation and Adaptation (BanD-AID)

    Science.gov (United States)

    Kusche, J.; Shum, C. K.; Jenkins, C. J.; Chen, J.; Guo, J.; Hossain, F.; Braun, B.; Calmant, S.; Ballu, V.; Papa, F.; Kuhn, M.; Ahmed, R.; Khan, Z. H.; Hossain, M.; Bernzen, A.; Dai, C.; Jia, Y.; Krien, Y.; Kuo, C. Y.; Liibusk, A.; Shang, K.; Testut, L.; Tseng, K. H.; Uebbing, B.; Rietbroek, R.; Valty, P.; Wan, J.

    2016-12-01

    As a low-lying region and the largest coastal delta in the world, Bangladesh already faces tremendous vulnerability. Accelerated sea-level rise, along with tectonic, sediment-load and groundwater-extraction induced land uplift/subsidence, has exacerbated Bangladesh's coastal vulnerability. Climate change has further intensified these risks with increasing temperatures, greater rainfall volatility, and increased incidence of intensified cyclones, in addition to its seasonal transboundary monsoonal flooding. Our Belmont Forum/IGFA G8 project BanD-AiD, http://Belmont-BanDAiD.org, or http://Blemont-SeaLevel.org, comprises an international cross-disciplinary team, including stakeholders in Bangladesh, and aims at a joint assessment of the physical and social science knowledge of the dynamics which govern coastal vulnerability and societal resilience in Bangladesh. We have built a prototype observational system, following the Belmont Challenge identified Earth System Analysis & Prediction System (ESAPS) for the Bangladesh Delta, to achieve the physical science objectives of the project. The prototype observational system is exportable to other regions of the world. We studied the physical causes of relative sea-level rise in coastal Bangladesh, with the goal to separate and quantify land subsidence and geocentric sea-level rise signals at adequate spatial scales using contemporary space geodetic and remote sensing data. We used a social and natural science integrative approach to investigate the various social and economic drivers behind land use change, population increase, migration and community resilience, to understand the social dynamics of this complex region and to forecast likely and alternative scenarios for maintaining the societal resilience of this vital region, which currently houses a quarter of Bangladesh's 160 million people.

  12. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
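
    A minimal sketch of how statistical wind-speed samples translate into available-power estimates is given below; the Weibull parameters, rotor area and power-curve limits are illustrative placeholders, not the Goldstone values from this record.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, area, cp = 1.225, 2000.0, 0.40          # air density [kg/m^3], rotor area [m^2], power coefficient
cut_in, rated_v, cut_out = 3.0, 12.0, 25.0   # illustrative turbine operating limits [m/s]

# Hourly wind-speed samples drawn from a Weibull distribution (shape 2, scale 7 m/s)
v = 7.0 * rng.weibull(2.0, size=24 * 365)

def power(v):
    """Simple power curve: cubic below rated speed, flat at rated, zero outside limits."""
    p = 0.5 * rho * area * cp * np.minimum(v, rated_v) ** 3
    return np.where((v < cut_in) | (v > cut_out), 0.0, p)

print(f"mean available power: {power(v).mean() / 1e3:.1f} kW")
```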

  13. Limitations of Cox Proportional Hazards Analysis in Mortality Prediction of Patients with Acute Coronary Syndrome

    Directory of Open Access Journals (Sweden)

    Babińska Magdalena

    2015-12-01

    Full Text Available The aim of this study was to evaluate the possibility of incorrect assessment of mortality risk factors in a group of patients affected by acute coronary syndrome, due to the lack of hazard proportionality in the Cox regression model. One hundred and fifty consecutive patients with acute coronary syndrome (ACS), with no age limit, were enrolled. Univariable and multivariable Cox proportional hazard analyses were performed. The proportional hazard assumptions were verified using Schoenfeld residuals, the χ2 test and the rank correlation coefficient between residuals and time. In the total group of 150 patients, 33 (22.0%) deaths from any cause were registered in the follow-up period of 64 months. The non-survivors were significantly older and had an increased prevalence of diabetes and erythrocyturia, a longer history of coronary artery disease, higher concentrations of serum creatinine, cystatin C, uric acid, glucose, C-reactive protein (CRP), homocysteine and B-type natriuretic peptide (NT-proBNP), and lower concentrations of serum sodium. No significant differences in echocardiography parameters were observed between groups. The following factors were risk factors for death and fulfilled the proportional hazard assumption in the univariable model: smoking, occurrence of diabetes and anaemia, duration of coronary artery disease, and abnormal serum concentrations of uric acid, sodium, homocysteine, cystatin C and NT-proBNP; in the multivariable model, the risk factors for death were smoking and elevated concentrations of homocysteine and NT-proBNP. The study has demonstrated that violation of the proportional hazard assumption in the Cox regression model may lead to a misspecified model unless all predictive factors are time-independent.
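
    The proportional-hazards check described above (Schoenfeld residuals with a formal test) can be reproduced with standard survival software; the minimal sketch below uses the lifelines package and its bundled example data, purely to illustrate the mechanics rather than the clinical analysis in this record.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()                                   # example survival dataset shipped with lifelines
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# Schoenfeld-residual based test of the proportional hazards assumption
results = proportional_hazard_test(cph, df, time_transform="rank")
results.print_summary()                             # per-covariate test statistics and p-values

# Covariates that violate the assumption should be remodelled with time-varying effects
cph.check_assumptions(df, p_value_threshold=0.05)
```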

  14. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    Science.gov (United States)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
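
    The empirical hazard curve described here is essentially an exceedance-rate calculation over a large set of simulated events; the toy sketch below shows that step with made-up inundation depths and occurrence rates (no Bayesian fitting, which is the paper's added robustness step).

```python
import numpy as np

rng = np.random.default_rng(3)
n_events = 5000
depth = rng.lognormal(mean=0.0, sigma=1.0, size=n_events)   # simulated inundation depth [m] at the site
rate = np.full(n_events, 0.01 / n_events)                    # mean annual rate assigned to each scenario

# Hazard curve: mean annual rate of exceeding each intensity level
levels = np.linspace(0.1, 10.0, 50)
exceedance = np.array([rate[depth >= im].sum() for im in levels])

# Depth with a 10% probability of exceedance in 50 years (Poisson assumption)
target_rate = -np.log(1 - 0.10) / 50.0
design_depth = np.interp(target_rate, exceedance[::-1], levels[::-1])
print(f"design inundation depth: {design_depth:.2f} m")
```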

  15. Uncertainties in Predicting Debris Flow Hazards Following Wildfire

    NARCIS (Netherlands)

    Hyde, K.D.; Riley, Karin; Stoof, C.R.

    2016-01-01

    Wildfire increases the probability of debris flows posing hazardous conditions where values-at-risk exist downstream of burned areas. Conditions and processes leading to postfire debris flows usually follow a general sequence defined here as the postfire debris flow hazard cascade: biophysical

  16. Natural hazard modeling and uncertainty analysis [Chapter 2

    Science.gov (United States)

    Matthew Thompson; Jord J. Warmink

    2017-01-01

    Modeling can play a critical role in assessing and mitigating risks posed by natural hazards. These modeling efforts generally aim to characterize the occurrence, intensity, and potential consequences of natural hazards. Uncertainties surrounding the modeling process can have important implications for the development, application, evaluation, and interpretation of...

  17. Coordinate descent methods for the penalized semiparametric additive hazards model

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...

  18. Coordinate descent methods for the penalized semiparametric additive hazards model

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2012-01-01

    The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...

  19. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  20. Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2005-01-01

    cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling

  1. Landslide hazard assessment using digital elevation models

    National Research Council Canada - National Science Library

    Fenton, Gordon A; McLean, Amanda; Nadim, Farrokh; Griffiths, D.V

    2013-01-01

    A landslide hazard assessment framework capable of estimating regional probabilities of slope failure can be used to aid a vast number of communities currently living in landslide “danger zones...

  2. Prediction and Prevention of Chemical Reaction Hazards: Learning by Simulation.

    Science.gov (United States)

    Shacham, Mordechai; Brauner, Neima; Cutlip, Michael B.

    2001-01-01

    Points out that chemical hazards are the major cause of accidents in chemical industry and describes a safety teaching approach using a simulation. Explains a problem statement on exothermic liquid-phase reactions. (YDS)

  3. Geospatial subsidence hazard modelling at Sterkfontein Caves ...

    African Journals Online (AJOL)

    This paper covers a GIS approach to identifying hazardous areas at the Sterkfontein Caves. It makes a contribution to risk assessment of land with shallow caves underneath it. The aim of the study is to ensure public safety in a concentrated area frequently visited by the public and is part of a programme to identify ...

  4. Deterministic slope failure hazard assessment in a model catchment and its replication in neighbourhood terrain

    Directory of Open Access Journals (Sweden)

    Kiran Prasad Acharya

    2016-01-01

    Full Text Available In this work, we prepare and replicate a deterministic slope failure hazard model in small-scale catchments of the tertiary sedimentary terrain of Niihama city in western Japan. It is generally difficult to replicate a deterministic model from one catchment to another due to the lack of exactly similar geo-mechanical and hydrological parameters. To overcome this problem, discriminant function modelling was done with the deterministic slope failure hazard model and the DEM-based causal factors of slope failure, which yielded an empirical parametric relationship or a discriminant function equation. This parametric relationship was used to predict the slope failure hazard index in a total of 40 target catchments in the study area. From ROC plots, prediction rates between 0.719–0.814 and 0.704–0.805 were obtained with inventories of September and October slope failures, respectively. This means September slope failures were better predicted than October slope failures by approximately 1%. The results show that the prediction of the slope failure hazard index is possible, even at a small catchment scale, in similar geophysical settings. Moreover, the replication of the deterministic model through discriminant function modelling was found to be successful in predicting typhoon rainfall-induced slope failures with moderate to good accuracy without any use of geo-mechanical and hydrological parameters.
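
    The replication step described here amounts to fitting a discriminant function on DEM-derived causal factors and scoring its prediction rate with ROC curves; the sketch below, with synthetic factors and hypothetical variable names (not the Niihama data), shows that pattern using scikit-learn.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_cells = 2000
# Hypothetical DEM-derived causal factors per grid cell (e.g. slope, wetness index, curvature)
X = rng.normal(size=(n_cells, 3))
failed = (0.9 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n_cells)) > 1.0

# Discriminant function calibrated in the "model catchment"
lda = LinearDiscriminantAnalysis().fit(X[:1000], failed[:1000])

# Replication: hazard index predicted for the remaining "target catchments"
hazard_index = lda.decision_function(X[1000:])
print(f"prediction rate (area under ROC): {roc_auc_score(failed[1000:], hazard_index):.3f}")
```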

  5. Freeze Prediction Model

    Science.gov (United States)

    Morrow, C. T. (Principal Investigator)

    1981-01-01

    Measurements of wind speed, net irradiation, and of air, soil, and dew point temperatures in an orchard at the Rock Springs Agricultural Research Center, as well as topographical and climatological data and a description of the major apple growing regions of Pennsylvania were supplied to the University of Florida for use in running the P-model, freeze prediction program. Results show that the P-model appears to have considerable applicability to conditions in Pennsylvania. Even though modifications may have to be made for use in the fruit growing regions, there are advantages for fruit growers with the model in its present form.

  6. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
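
    The reported c-index and optimal cut-point correspond to standard logistic-regression discrimination measures; a hedged sketch with simulated predictors (not the predictAL variables) is shown below, including a Youden-style choice of cutoff balancing sensitivity and specificity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
n = 4000
X = rng.normal(size=(n, 9))                      # nine illustrative baseline predictors
p = 1 / (1 + np.exp(-(-2.5 + X[:, 0] + 0.8 * X[:, 1])))
onset = rng.random(n) < p                        # onset of hazardous drinking at follow-up

model = LogisticRegression(max_iter=1000).fit(X, onset)
risk = model.predict_proba(X)[:, 1]
print(f"c-index (AUC): {roc_auc_score(onset, risk):.3f}")

# Choose the cutoff maximising sensitivity + specificity (Youden index)
fpr, tpr, thresholds = roc_curve(onset, risk)
best = np.argmax(tpr - fpr)
print(f"cutoff: {thresholds[best]:.3f}, sensitivity: {tpr[best]:.2f}, specificity: {1 - fpr[best]:.2f}")
```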

  7. A model based on crowdsourcing for detecting natural hazards

    Science.gov (United States)

    Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.

    2015-12-01

    Remote sensing technology provides a new method for the detection, early warning, mitigation and relief of natural hazards. Given the suddenness and the unpredictability of the location of natural hazards, as well as the practical demands of hazard work, this article proposes an evaluation model for remote sensing detection of natural hazards based on crowdsourcing. Firstly, using a crowdsourcing model and with the help of the Internet and the power of hundreds of millions of Internet users, this evaluation model provides visual interpretation of high-resolution remote sensing images of the hazard area and collects a large volume of valuable disaster data; secondly, the model adopts a dynamic voting consistency strategy to evaluate the disaster data provided by the crowdsourcing workers; thirdly, the model pre-estimates the disaster severity with a disaster pre-evaluation model based on regional buffers; lastly, the model activates the corresponding expert system according to the forecast results. The idea of this model breaks the boundary between geographic information professionals and the public, realizes public participation and citizen science, and improves the accuracy and timeliness of hazard assessment results.

  8. Potential of weight of evidence modelling for gully erosion hazard assessment in Mbire District - Zimbabwe

    Science.gov (United States)

    Dube, F.; Nhapi, I.; Murwira, A.; Gumindoga, W.; Goldin, J.; Mashauri, D. A.

    Gully erosion is an environmental concern, particularly in areas where landcover has been modified by human activities. This study assessed the extent to which the potential of gully erosion could be successfully modelled as a function of seven environmental factors (landcover, soil type, distance from river, distance from road, Sediment Transport Index (STI), Stream Power Index (SPI) and Wetness Index (WI)) using a GIS-based Weight of Evidence Modelling (WEM) in the Mbire District of Zimbabwe. Results show that, of the seven studied factors affecting gully erosion, five (landcover, soil type, distance from river, STI and SPI) were significantly correlated with gully occurrence (p < 0.05), whereas the wetness index and distance from road were not (p > 0.05). A gully erosion hazard map showed that 78% of the very high hazard class area is within a distance of 250 m from rivers. Model validation indicated that 70% of the validation set of gullies were in the high hazard and very high hazard classes. The resulting map of areas susceptible to gully erosion has a prediction accuracy of 67.8%. The predictive capability of the weight of evidence model in this study suggests that landcover, soil type, distance from river, STI and SPI are useful in creating a gully erosion hazard map but may not be sufficient to produce a valid map of gully erosion hazard.
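
    Weight-of-evidence modelling combines, for each binary evidence layer, a positive and a negative weight derived from how often gullies fall inside versus outside that layer; a small numerical sketch with invented counts (not the Mbire data) is given below.

```python
import numpy as np

# Invented counts for one evidence layer (e.g. "within 250 m of a river")
gullies_in, gullies_out = 80, 20        # gully cells inside / outside the layer
cells_in, cells_out = 3000, 7000        # total cells inside / outside the layer

# Conditional probabilities of the evidence given gully presence / absence
p_in_g = gullies_in / (gullies_in + gullies_out)
p_in_ng = (cells_in - gullies_in) / (cells_in + cells_out - gullies_in - gullies_out)
p_out_g = 1 - p_in_g
p_out_ng = 1 - p_in_ng

w_plus = np.log(p_in_g / p_in_ng)       # weight where the evidence is present
w_minus = np.log(p_out_g / p_out_ng)    # weight where the evidence is absent
contrast = w_plus - w_minus             # overall strength of spatial association
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast C = {contrast:.2f}")
```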

  9. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  10. Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach

    Science.gov (United States)

    van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.

    2015-01-01

    Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0

  11. Fuzzy Cognitive Maps for Glacier Hazards Assessment: Application to Predicting the Potential for Glacier Lake Outbursts

    Science.gov (United States)

    Furfaro, R.; Kargel, J. S.; Fink, W.; Bishop, M. P.

    2010-12-01

    Glaciers and ice sheets are among the largest unstable parts of the solid Earth. Generally, glaciers are devoid of resources (other than water), are dangerous, are unstable and no infrastructure is normally built directly on their surfaces. Areas down valley from large alpine glaciers are also commonly unstable due to landslide potential of moraines, debris flows, snow avalanches, outburst floods from glacier lakes, and other dynamical alpine processes; yet there exists much development and human occupation of some disaster-prone areas. Satellite remote sensing can be extremely effective in providing cost-effective and time- critical information. Space-based imagery can be used to monitor glacier outlines and their lakes, including processes such as iceberg calving and debris accumulation, as well as changing thicknesses and flow speeds. Such images can also be used to make preliminary identifications of specific hazardous spots and allows preliminary assessment of possible modes of future disaster occurrence. Autonomous assessment of glacier conditions and their potential for hazards would present a major advance and permit systematized analysis of more data than humans can assess. This technical leap will require the design and implementation of Artificial Intelligence (AI) algorithms specifically designed to mimic glacier experts’ reasoning. Here, we introduce the theory of Fuzzy Cognitive Maps (FCM) as an AI tool for predicting and assessing natural hazards in alpine glacier environments. FCM techniques are employed to represent expert knowledge of glaciers physical processes. A cognitive model embedded in a fuzzy logic framework is constructed via the synergistic interaction between glaciologists and AI experts. To verify the effectiveness of the proposed AI methodology as applied to predicting hazards in glacier environments, we designed and implemented a FCM that addresses the challenging problem of autonomously assessing the Glacier Lake Outburst Flow
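
    A fuzzy cognitive map is essentially a signed weighted digraph whose node activations are iterated through a squashing function until they stabilise; the toy example below, with made-up concepts and weights rather than the glacier experts' map, shows one such inference loop.

```python
import numpy as np

# Concepts describing a hypothetical glacier-lake setting
labels = ["lake volume", "dam weakness", "ice avalanche", "outburst potential"]

# W[i, j]: causal influence of concept i on concept j (expert-assigned, here invented)
W = np.array([
    [0.0, 0.3, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.7],
    [0.2, 0.4, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0],
])

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-5.0 * x))   # squashing function

state = np.array([0.8, 0.4, 0.6, 0.0])                # initial activations from observations
for _ in range(25):                                   # iterate until (near) fixed point
    state = sigmoid(state @ W + state)                # update with memory of the previous activation

print(dict(zip(labels, np.round(state, 2))))
```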

  12. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  13. Earthquake Prediction Research In Iceland, Applications For Hazard Assessments and Warnings

    Science.gov (United States)

    Stefansson, R.

    The first multinational earthquake prediction research project in Iceland was the European Council encouraged SIL project of the Nordic countries, 1988-1995. The path selected for this research was to study the physics of crustal processes leading to earthquakes. It was considered that small earthquakes, down to magnitude zero, were the most significant for this purpose, because of the detailed information which they provide both in time and space. The test area for the project was the earthquake prone region of the South Iceland seismic zone (SISZ). The PRENLAB and PRENLAB-2 projects, 1996-2000, supported by the European Union, were a direct continuation of the SIL project, but with a more multidisciplinary approach. PRENLAB stands for "Earthquake prediction research in a natural laboratory". The basic objective was to advance our understanding in general of where, when and how dangerous earthquake motion might strike. Methods were developed to study crustal processes and conditions, by microearthquake information, by continuous GPS, InSAR, theoretical modelling, fault mapping and paleoseismology. New algorithms were developed for short term warnings. A very useful short term warning was issued twice in the year 2000, one for the sudden start of an eruption of Volcano Hekla on February 26, and the other 25 hours before the second (in a sequence of two) magnitude 6.6 (Ms) earthquake in the South Iceland seismic zone on June 21, with the correct location and approximate size. A formal short term warning, although not going to the public, was also issued before a magnitude 5 earthquake in November 1998. In the presentation it will be shortly described what these warnings were based on. A general hazard assessment was presented in scientific journals 10-15 years ago assessing within a few kilometers the location of the faults of the two 2000 earthquakes and suggesting

  14. Laser safety hazard assessment — A generic model

    Science.gov (United States)

    Tyrer, J. R.; Vassie, L. H.; Clarke, A. A.; Soufi, B.

    The status of laser safety amongst UK manufacturers is reported. A disturbing proportion feel the need for practical guidance on laser safety issues. Furthermore, no improvements in laser safety protocols in the past 5 years are evident. Closer studies of the much larger laser user community and also the HSE Inspectorate have revealed a matrix of issues pertinent to users and their laser application. A need to support the user with practical information is demonstrated. Laser safety research activities have investigated the task of hazard assessment and a novel 'Loughborough Laser Hazard Assessment Model' has been defined and tested. The model provides a general methodology for hazard assessment and, furthermore, is the kernel of an advisory software system. The methodical approach to hazard assessment can be extended to risk assessment processes in laser and non-laser related activities.

  15. Reduced-rank hazard regression for modelling non-proportional hazards.

    Science.gov (United States)

    Perperoglou, Aris; le Cessie, Saskia; van Houwelingen, Hans C

    2006-08-30

    The Cox proportional hazards model is the most common method to analyse survival data. However, the proportional hazards assumption might not hold. The natural extension of the Cox model is to introduce time-varying effects of the covariates. For some covariates, such as (surgical) treatment, non-proportionality could be expected beforehand. For some other covariates the non-proportionality only becomes apparent if the follow-up is long enough. It is often observed that all covariates show similar decaying effects over time. Such behaviour could be explained by the popular (gamma-) frailty model. However, the (marginal) effects of covariates in frailty models are not easy to interpret. In this paper we propose the reduced-rank model for time-varying effects of covariates. The starting point is a Cox model with p covariates and time-varying effects modelled by q time functions (constant included), leading to a p×q structure matrix that contains the regression coefficients for all covariate-by-time-function interactions. By reducing the rank of this structure matrix, a whole range of models is introduced, from the very flexible full-rank model (identical to a Cox model with time-varying effects) to the very rigid rank one model that mimics the structure of a gamma-frailty model, but is easier to interpret. We illustrate these models with an application to ovarian cancer patients. Copyright (c) 2005 John Wiley & Sons, Ltd.
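
    A compact way to write the reduced-rank idea described above (hedged, since notation varies across papers): with covariate vector x, time functions f(t) = (f_1(t), ..., f_q(t))' and a p×q coefficient matrix Theta, the time-varying-effects Cox model and its rank-r restriction are

```latex
\lambda(t \mid x) \;=\; \lambda_0(t)\,
\exp\!\bigl(x^{\top} \Theta\, f(t)\bigr),
\qquad \Theta \;=\; B\,\Gamma^{\top},
\quad B \in \mathbb{R}^{p\times r},\; \Gamma \in \mathbb{R}^{q\times r},\; r < \min(p,q).
```

    Rank r = min(p, q) recovers the full time-varying Cox model, while r = 1 forces all covariate effects to share a single time profile, mimicking a frailty-like decay.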

  16. Seismic hazard analysis of Tianjin area based on strong ground motion prediction

    Science.gov (United States)

    Zhao, Boming

    2010-08-01

    Taking Tianjin as an example, this paper proposes a methodology and process for evaluating near-fault strong ground motions from future earthquakes in order to mitigate earthquake damage to the metropolitan area and important engineering structures. Strong ground motion was predicted for the main faults of Tianjin by a hybrid method which mainly consists of the 3D finite difference method and stochastic Green's functions. Simulations were performed for the 3D structure of the Tianjin region and characterized asperity models. The characterized asperity model describing source heterogeneity is introduced following the fault information from the project Tianjin Active Faults and Seismic Hazard Assessment. We simulated the worst case, in which two earthquakes occur separately. The results indicate that the fault position, rupture process and the sedimentary deposits of the basin significantly affect the amplification of the simulated ground motion. Our results also demonstrate the possibility of practically simulating wave propagation, including basin-induced surface waves, in a broad frequency band for seismic hazard analysis near faults from future earthquakes in urbanized areas.

  17. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    Science.gov (United States)

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  18. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    Science.gov (United States)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, called the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated in a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of enormous quantities of information in this type of study. The CHTFPM theory encompasses the philosophy of looking at safety engineering from a new perspective: a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. That is why the CHTFPM is a predictive safety methodology: it foresees or anticipates accidents, system failures and unacceptable risks, so corrective action can be taken in order to prevent these unwanted issues. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  19. A conflict model for the international hazardous waste disposal dispute.

    Science.gov (United States)

    Hu, Kaixian; Hipel, Keith W; Fang, Liping

    2009-12-15

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  20. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identifying patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer-Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. A melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were

  1. The influence of hazard models on GIS-based regional risk assessments and mitigation policies

    Science.gov (United States)

    Bernknopf, R.L.; Rabinovici, S.J.M.; Wood, N.J.; Dinitz, L.B.

    2006-01-01

    Geographic information systems (GIS) are important tools for understanding and communicating the spatial distribution of risks associated with natural hazards in regional economies. We present a GIS-based decision support system (DSS) for assessing community vulnerability to natural hazards and evaluating potential mitigation policy outcomes. The Land Use Portfolio Modeler (LUPM) integrates earth science and socioeconomic information to predict the economic impacts of loss-reduction strategies. However, the potential use of such systems in decision making may be limited when multiple but conflicting interpretations of the hazard are available. To explore this problem, we conduct a policy comparison using the LUPM to test the sensitivity of three available assessments of earthquake-induced lateral-spread ground failure susceptibility in a coastal California community. We find that the uncertainty regarding the interpretation of the science inputs can influence the development and implementation of natural hazard management policies. Copyright ?? 2006 Inderscience Enterprises Ltd.

  2. Hazard identification by extended multilevel flow modelling with function roles

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Jørgensen, Sten Bay

    2014-01-01

    HAZOP studies are widely accepted in the chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of the systems. In this paper, a HAZOP reasoning method based on function-oriented modelling, multilevel flow modelling (MFM

  3. Space Particle Hazard Measurement and Modeling

    Science.gov (United States)

    2016-09-01

    rejection. The method was applied to data obtained by the Relativistic Electron-Proton Telescopes (REPT) on the Van Allen Probes satellites, providing new...capabilities of current instrumentation [37]. Van Allen Probe data continued to be evaluated to remove contamination and compared to models. Evaluated...AFRL/RVBX performed substantial research and development in the areas of the AE9/AP9/SPM standard radiation belt model, energetic space particle

  4. Geoinformational prognostic model of mudflows hazard and mudflows risk for the territory of Ukrainian Carpathians

    Science.gov (United States)

    Chepurna, Tetiana B.; Kuzmenko, Eduard D.; Chepurnyj, Igor V.

    2017-06-01

    The article is devoted to the geological problem of space-time regional prognostication of mudflow hazard. A methodology for the space-time prediction of mudflow hazard by creating a GIS predictive model has been developed. Using GIS technologies, a relevant and representative set of significant spatial and temporal factors, suitable for use in the regional prediction of mudflow hazard, was selected. Geological, geomorphological, technological, climatic, and landscape factors have been selected as spatial mudflow factors. Spatial analysis is based on detecting a regular connection between spatial factor characteristics and the spatial distribution of mudflow sites. The function of a standard complex spatial index (SCSI) of the probability of mudflow site distribution has been calculated. The temporal, long-term prediction of mudflow activity was based on the hypothesis of the regular recurrence of natural processes. Heliophysical, seismic, meteorological, and hydrogeological factors have been selected as temporal mudflow factors. The function of a complex index of long-standing mudflow activity (CIMA) has been calculated. A prognostic geoinformational model of mudflow hazard up to the year 2020, the year of the next peak of mudflow activity, has been created. Mudflow risks have been calculated and a cartogram of mudflow risk assessment within the limits of administrative-territorial units has been built for the year 2020.

  5. Risk prediction of Critical Infrastructures against extreme natural hazards: local and regional scale analysis

    Science.gov (United States)

    Rosato, Vittorio; Hounjet, Micheline; Burzel, Andreas; Di Pietro, Antonio; Tofani, Alberto; Pollino, Maurizio; Giovinazzi, Sonia

    2016-04-01

    Natural hazard events can induce severe impacts on the built environment; they can hit wide and densely populated areas, where there is a large number of (inter)dependent technological systems whose damage could cause the failure or malfunctioning of further services, spreading the impacts to wider geographical areas. The EU project CIPRNet (Critical Infrastructures Preparedness and Resilience Research Network) is realizing an unprecedented Decision Support System (DSS) which enables operational risk prediction for Critical Infrastructures (CI) by predicting the occurrence of natural events (from long-term weather to short-term nowcast predictions), correlating intrinsic vulnerabilities of CI elements with the different events' manifestation strengths, and analysing the resulting Damage Scenario. The Damage Scenario is then transformed into an Impact Scenario, where punctual CI element damages are transformed into micro (local area) or meso (regional) scale Service Outages. At the smaller scale, the DSS simulates detailed city models (where CI dependencies are explicitly accounted for) that provide important input for crisis management organizations, whereas at the regional scale, using an approximate System-of-Systems model describing systemic interactions, the focus is on raising awareness. The DSS has allowed the development of a novel simulation framework for predicting earthquake shake maps originating from a given seismic event, considering the shock wave propagation in inhomogeneous media and the subsequently produced damages, estimated from building vulnerabilities on the basis of a phenomenological model [1, 2]. Moreover, in the presence of areas containing river basins, when abundant precipitation is expected, the DSS solves the hydrodynamic 1D/2D models of the river basins for predicting the flux runoff and the corresponding flood dynamics. This calculation allows the estimation of the Damage Scenario and triggers the evaluation of the Impact Scenario.

  6. SAS macros for point and interval estimation of area under the receiver operating characteristic curve for non-proportional and proportional hazards Weibull models.

    Science.gov (United States)

    Mannan, Haider; Stevenson, Chris

    2010-08-01

    When predicting the risk of cardiovascular end points using survival models, the proportional hazards assumption is often not met. Thus, non-proportional hazards models are more appropriate for developing risk prediction equations in such situations. However, computer programs for evaluating the prediction performance of such models have rarely been addressed. We therefore developed SAS macro programs for evaluating the discriminative ability of a non-proportional hazards Weibull model developed by Anderson (1991) and that of a proportional hazards Weibull model, using the area under the receiver operating characteristic (ROC) curve. Two SAS macro programs for the non-proportional hazards Weibull model, using Proc NLIN and Proc NLP respectively, and model validation using the area under the ROC curve (with its confidence limits) were written in the SAS IML language. A similar SAS macro for the proportional hazards Weibull model was also written. The programs were applied to data on coronary heart disease incidence for a Framingham population cohort. The five risk factors considered were current smoking, age, blood pressure, cholesterol and obesity. The predictive ability of the non-proportional hazards Weibull model was slightly higher than that of its proportional hazards counterpart. An advantage of SAS Proc NLP in the example provided here is that it provides significance levels for the parameter estimates, whereas Proc NLIN does not. The programs are very useful for evaluating the predictive performance of non-proportional and proportional hazards Weibull models.
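
    The abstract's idea of scoring a Weibull proportional hazards model by the area under the ROC curve can be illustrated outside SAS. The Python sketch below (an assumption-laden stand-in for the macros, not a translation of them) simulates data from a Weibull proportional hazards model, converts the survival function into an event risk at a fixed horizon, and computes the AUC with the rank-sum formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)               # single risk factor
beta, shape, scale = 0.8, 1.5, 10.0  # assumed Weibull PH parameters

# Weibull proportional hazards: S(t|x) = exp(-(t/scale)**shape * exp(beta*x))
u = rng.uniform(size=n)
t_event = scale * (-np.log(u) / np.exp(beta * x)) ** (1.0 / shape)

horizon = 5.0
event_by_horizon = t_event <= horizon
# Model-based risk of an event before the horizon for each subject
risk = 1.0 - np.exp(-(horizon / scale) ** shape * np.exp(beta * x))

def auc(score, label):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(score)
    ranks = np.empty(len(score)); ranks[order] = np.arange(1, len(score) + 1)
    n_pos = label.sum(); n_neg = len(label) - n_pos
    return (ranks[label].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print("AUC at horizon:", round(auc(risk, event_by_horizon), 3))
```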

  7. TsuPy: Computational robustness in Tsunami hazard modelling

    Science.gov (United States)

    Schäfer, Andreas M.; Wenzel, Friedemann

    2017-05-01

    Modelling wave propagation is the most essential part of assessing the risk and hazard of tsunami and storm surge events. Computational assessment of the variability of such events requires many simulations. Even today, most of these simulations are run on supercomputers due to the large amount of computation necessary. In this study, a simulation framework named TsuPy is introduced to quickly compute tsunami events on a personal computer. It uses the parallel processing power of GPUs to accelerate computation. The system is tailored to robust tsunami hazard and risk modelling and links up to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed through a sensitivity study comparing the error impact of various model elements, e.g. topo-bathymetric resolution, knowledge of Manning friction parameters, and knowledge of the tsunami source itself. This sensitivity study is applied to inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip in the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform tsunami hazard and risk assessment.
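
    TsuPy itself is a GPU-parallelized 2D solver; as a much simpler conceptual stand-in, the sketch below advances a 1D linearized shallow-water system with an explicit staggered-grid update. The domain size, depth and initial hump are arbitrary assumptions, not a TsuPy configuration.

```python
import numpy as np

# Minimal 1D linearized shallow-water solver (staggered grid, explicit
# update); illustrative only, not TsuPy's actual numerical scheme.
g = 9.81
nx, dx = 400, 1000.0             # 400 km domain, 1 km cells
depth = np.full(nx, 4000.0)      # flat 4 km deep ocean (assumption)
dt = 0.5 * dx / np.sqrt(g * depth.max())   # CFL-limited time step

eta = np.exp(-((np.arange(nx) * dx - 200e3) / 20e3) ** 2)  # initial hump (m)
u = np.zeros(nx + 1)             # velocities on cell faces

for _ in range(2000):
    # momentum: du/dt = -g * d(eta)/dx
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -d(h*u)/dx
    flux = depth.mean() * u
    eta -= dt * (flux[1:] - flux[:-1]) / dx

print("max surface elevation after run:", float(eta.max()))
```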

  8. Components of behavioural impulsivity and automatic cue approach predict unique variance in hazardous drinking.

    Science.gov (United States)

    Christiansen, Paul; Cole, Jon C; Goudie, Andrew J; Field, Matt

    2012-01-01

    Hazardous drinking is associated with both increased impulsivity and automatic approach tendencies elicited by alcohol-related cues. However, impulsivity is a multi-factorial construct, and it is currently unclear if all components of impulsivity are associated with heavy drinking. Furthermore, emerging evidence suggests that the relationships between hazardous drinking and automatic alcohol cognitions may be moderated by individual differences in impulsivity. The aim of this study was to investigate the independence of measures of impulsivity and their association with hazardous drinking, and to examine if the relationship between hazardous drinking and automatic alcohol approach tendencies would be moderated by individual differences in impulsivity. Ninety-seven social drinkers (65 female) completed questionnaire measures of trait impulsivity, alcohol consumption and hazardous drinking. Participants also completed computerised measures of automatic alcohol approach tendencies (stimulus-response compatibility (SRC) task), and two behavioural measures of impulsivity (Go/No-go and delay discounting tasks). Principal component analysis revealed that the two measures of behavioural impulsivity were distinct from each other and from self-reported trait impulsivity, although self-reported non-planning impulsivity loaded on to two factors (trait impulsivity and delay discounting). Furthermore, all measures of impulsivity predicted unique variance in hazardous drinking as did automatic alcohol approach tendencies, although the latter relationship was not moderated by impulsivity. These results indicate that multiple components of impulsivity and automatic alcohol approach tendencies explain unique variance in hazardous drinking.
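
    The notion of predictors explaining "unique variance" can be made concrete with a hierarchical-regression sketch: the gain in R² when a predictor is added to a model already containing the others. The data and predictor names below are simulated stand-ins, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 97  # sample size matching the study; the data here are simulated

# Simulated standardized predictors (assumed names, not the study's data)
go_nogo = rng.normal(size=n)        # behavioural inhibition
discounting = rng.normal(size=n)    # delay discounting
trait = rng.normal(size=n)          # self-reported trait impulsivity
approach = rng.normal(size=n)       # SRC approach bias
hazardous = (0.3 * go_nogo + 0.25 * discounting + 0.2 * trait
             + 0.25 * approach + rng.normal(scale=1.0, size=n))

def r2(columns, y):
    """R-squared of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

predictors = {"go_nogo": go_nogo, "discounting": discounting,
              "trait": trait, "approach": approach}
full = r2(list(predictors.values()), hazardous)
for name in predictors:
    reduced = r2([v for k, v in predictors.items() if k != name], hazardous)
    print(f"unique variance of {name}: {full - reduced:.3f}")
```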

  9. Uncertainties in modeling hazardous gas releases for emergency response

    OpenAIRE

    Kathrin Baumann-Stanzer; Sirma Stenzel

    2011-01-01

    In case of an accidental release of toxic gases, emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures of uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stability and roughness...

  10. Review of methods for modelling forest fire risk and hazard

    African Journals Online (AJOL)

    user

    need to identify a method or combination of methods to help model forest fire risk and hazard to enable the sustainability of the natural resources. ... fire behaviour through variations in the amount of solar radiation and wind that different aspects ... drying both the soil and the vegetation. Slope is an extremely important ...

  11. Identification, prediction, and mitigation of sinkhole hazards in evaporite karst areas

    Science.gov (United States)

    Gutierrez, F.; Cooper, A.H.; Johnson, K.S.

    2008-01-01

    Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility and, commonly, a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, investigation, prediction, and mitigation. Identification techniques include field surveys and geomorphological mapping combined with accounts from local people and historical sources. Detailed sinkhole maps can be constructed from sequential historical maps, recent topographical maps, and digital elevation models (DEMs) complemented with building-damage surveying, remote sensing, and high-resolution geodetic surveys. On a more detailed level, information from exposed paleosubsidence features (paleokarst), speleological explorations, geophysical investigations, trenching, dating techniques, and boreholes may help in investigating dissolution and subsidence features. Information on the hydrogeological pathways, including caves, springs, and swallow holes, is particularly important, especially when corroborated by tracer tests. These diverse data sources make a valuable database: the karst inventory. From this dataset, sinkhole susceptibility zonations (relative probability) may be produced based on the spatial distribution of the features and good knowledge of the local geology. Sinkhole distribution can be investigated by spatial distribution analysis techniques including studies of preferential elongation, alignment, and nearest neighbor analysis. More objective susceptibility models may be obtained by analyzing the statistical relationships between the known sinkholes and the conditioning factors. Chronological information on sinkhole formation is required to estimate the probability of

  12. Development of Predictive Relationships for Flood Hazard Assessments in Ungaged Basins

    Science.gov (United States)

    2016-02-01

    Hydraulics Engineering Technical Note (CHETN) is to develop predictive relationships for characterizing the flood hazard in ungaged basins ... intervals to an hour. Temporal downscaling of the 24-hour accumulated return-period rainfall intensity to an hourly rate was based on the Soil Conservation ... return period. (Source: METCON, Meteorological Connections LLC.) SOIL DATA: The Food and Agriculture Organization (FAO) United Nations world soil

  13. Current Methods of Natural Hazards Communication used within Catastrophe Modelling

    Science.gov (United States)

    Dawber, C.; Latchman, S.

    2012-04-01

    In the field of catastrophe modelling, natural hazards need to be explained every day to (re)insurance professionals so that they may understand estimates of the loss potential of their portfolio. The effective communication of natural hazards to city professionals requires different strategies depending on the audience, their prior knowledge and respective backgrounds. It is best to have at least three sets of tools in your arsenal for a specific topic: 1) an illustration/animation, 2) a mathematical formula, and 3) a real-world case study example. This multi-faceted approach will be effective for those who learn best by pictorial, mathematical or anecdotal means. To show this, we use a set of real examples employed in the insurance industry of how different aspects of natural hazards, and the uncertainty around them, are explained to city professionals; for example, explaining the different modules within a catastrophe model, such as the hazard, vulnerability and loss modules. We highlight how recent technology such as 3D plots, video recording and Google Earth maps, when used properly, can help explain concepts quickly and easily. Finally, we also examine the pitfalls of using overly complicated visualisations and, in general, how counter-intuitive deductions may be made.

  14. Penalized estimation for proportional hazards models with current status data.

    Science.gov (United States)

    Lu, Minggen; Li, Chin-Shang

    2017-12-30

    We provide a simple and practical, yet flexible, penalized estimation method for a Cox proportional hazards model with current status data. We approximate the baseline cumulative hazard function by monotone B-splines and use a hybrid approach based on the Fisher scoring algorithm and isotonic regression to compute the penalized estimates. We show that the penalized estimator of the nonparametric component achieves the optimal rate of convergence under certain smoothness conditions and that the estimators of the regression parameters are asymptotically normal and efficient. Moreover, a simple variance estimation method is considered for inference on the regression parameters. We perform 2 extensive Monte Carlo studies to evaluate the finite-sample performance of the penalized approach and compare it with the 3 competing R packages: C1.coxph, intcox, and ICsurv. A goodness-of-fit test and model diagnostics are also discussed. The methodology is illustrated with 2 real applications. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Measurement, geospatial, and mechanistic models of public health hazard vulnerability and jurisdictional risk.

    Science.gov (United States)

    Testa, Marcia A; Pettigrew, Mary L; Savoia, Elena

    2014-01-01

    County and state health departments are increasingly conducting hazard vulnerability and jurisdictional risk (HVJR) assessments for public health emergency preparedness and mitigation planning and evaluation to improve the public health disaster response; however, integration and adoption of these assessments into practice are still relatively rare. While the quantitative methods associated with complex analytic and measurement methods, causal inference, and decision theory are common in public health research, they have not been widely used in public health preparedness and mitigation planning. To address this gap, the Harvard School of Public Health PERLC's goal was to develop measurement, geospatial, and mechanistic models to aid public health practitioners in understanding the complexity of HVJR assessment and to determine the feasibility of using these methods for dynamic and predictive HVJR analyses. We used systematic reviews, causal inference theory, structural equation modeling (SEM), and multivariate statistical methods to develop the conceptual and mechanistic HVJR models. Geospatial mapping was used to inform the hypothetical mechanistic model by visually examining the variability and patterns associated with county-level demographic, social, economic, hazards, and resource data. A simulation algorithm was developed for testing the feasibility of using SEM estimation. The conceptual model identified the predictive latent variables used in public health HVJR tools (hazard, vulnerability, and resilience), the outcomes (human, physical, and economic losses), and the corresponding measurement subcomponents. This model was translated into a hypothetical mechanistic model to explore and evaluate causal and measurement pathways. To test the feasibility of SEM estimation, the mechanistic model path diagram was translated into linear equations and solved simultaneously using simulated data representing 192 counties. Measurement, geospatial, and mechanistic

  16. Defaultable Game Options in a Hazard Process Model

    Directory of Open Access Journals (Sweden)

    Tomasz R. Bielecki

    2009-01-01

    The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection between arbitrage prices and a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.

  17. Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent

    Directory of Open Access Journals (Sweden)

    Noah Simon

    2011-03-01

    We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of l1 and l2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find considerable speedup of our algorithm over competing methods.
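
    The reference implementation of this algorithm is the authors' R package glmnet; as a rough conceptual sketch only, the Python code below writes down the elastic-net-penalized negative log partial likelihood and minimizes it with a generic optimizer rather than the paper's cyclical coordinate descent with warm starts.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
t = rng.exponential(1.0 / np.exp(X @ beta_true))    # event times
event = rng.uniform(size=n) < 0.8                   # ~20% random censoring

def neg_log_partial_likelihood(beta, lam=0.1, alpha=0.5):
    """Breslow partial likelihood plus an elastic net penalty."""
    eta = X @ beta
    order = np.argsort(-t)                          # descending time
    eta_o, ev_o = eta[order], event[order]
    # log of the risk-set sums via a cumulative logsumexp over descending times
    running = np.logaddexp.accumulate(eta_o)
    pl = np.sum(ev_o * (eta_o - running))
    penalty = lam * (alpha * np.abs(beta).sum()
                     + (1 - alpha) / 2 * (beta ** 2).sum())
    return -pl + n * penalty

fit = minimize(neg_log_partial_likelihood, np.zeros(p), method="Powell")
print("penalized estimates:", np.round(fit.x, 3))
```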

  18. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used ...
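
    A bare-bones version of comparing rival modelling strategies on repeated bootstraps of a single data set might look like the sketch below, which scores two illustrative strategies by their out-of-bootstrap Brier scores; it does not reproduce the paper's Brier-score decomposition or its confidence-score definition.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(4)
n, p = 300, 5
X = rng.normal(size=(n, p))
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))).astype(int)

strategies = {"logistic": lambda: LogisticRegression(max_iter=1000),
              "tree": lambda: DecisionTreeClassifier(max_depth=3)}
scores = {name: [] for name in strategies}

for b in range(200):                        # repeated bootstraps of the same data
    idx = rng.integers(0, n, size=n)        # training rows (with replacement)
    oob = np.setdiff1d(np.arange(n), idx)   # out-of-bootstrap rows for validation
    for name, make in strategies.items():
        model = make().fit(X[idx], y[idx])
        prob = model.predict_proba(X[oob])[:, 1]
        scores[name].append(brier_score_loss(y[oob], prob))

for name, vals in scores.items():
    print(name, "mean Brier:", round(np.mean(vals), 3),
          "SD:", round(np.std(vals), 3))
```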

  19. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 deg. and 0.2 deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production ...

  20. Global Partial Likelihood for Nonparametric Proportional Hazards Models.

    Science.gov (United States)

    Chen, Kani; Guo, Shaojun; Sun, Liuquan; Wang, Jane-Ling

    2010-01-01

    As an alternative to the local partial likelihood method of Tibshirani and Hastie, and of Fan, Gijbels, and King, a global partial likelihood method is proposed to estimate the covariate effect in a nonparametric proportional hazards model, λ(t|x) = exp{ψ(x)}λ₀(t). The estimator, ψ̂(x), reduces to the Cox partial likelihood estimator if the covariate is discrete. The estimator is shown to be consistent and semiparametrically efficient for linear functionals of ψ(x). Moreover, Breslow-type estimation of the cumulative baseline hazard function, using the proposed estimator ψ̂(x), is proved to be efficient. The asymptotic bias and variance are derived under regularity conditions. Computation of the estimator involves an iterative but simple algorithm. Extensive simulation studies provide evidence supporting the theory. The method is illustrated with the Stanford heart transplant data set. The proposed global approach is also extended to a partially linear proportional hazards model and found to provide efficient estimation of the slope parameter. This article has the supplementary materials online.

  1. COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING

    Directory of Open Access Journals (Sweden)

    N. Mijani

    2017-09-01

    Landslide is one of the main geomorphic processes affecting development prospects in mountainous areas and causes disastrous accidents. Landslide is an event with different uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contribution of this paper is a comprehensive set of criteria causing landslide hazard, considered together with their uncertainties, and a comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, a city in Iran which has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment showed that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR respectively cover 13, 26 and 35 percent of the study area with a very high risk level. Based on these findings, the Fuzzy-AHP model has been selected as the most appropriate method for landslide zonation in the city of Sari, with the Fuzzy Gamma method, by a minor difference, in second place.

  2. Comparison of Fuzzy-Based Models in Landslide Hazard Mapping

    Science.gov (United States)

    Mijani, N.; Neysani Samani, N.

    2017-09-01

    Landslide is one of the main geomorphic processes affecting development prospects in mountainous areas and causes disastrous accidents. Landslide is an event with different uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contribution of this paper is a comprehensive set of criteria causing landslide hazard, considered together with their uncertainties, and a comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, a city in Iran which has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment showed that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR respectively cover 13, 26 and 35 percent of the study area with a very high risk level. Based on these findings, the Fuzzy-AHP model has been selected as the most appropriate method for landslide zonation in the city of Sari, with the Fuzzy Gamma method, by a minor difference, in second place.
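
    The fuzzy overlay operators compared in the paper have compact closed forms; the sketch below applies the standard Fuzzy-OR and Fuzzy Gamma combinations to illustrative membership rasters (the layer names, gamma value and threshold are assumptions, not the study's settings).

```python
import numpy as np

# Illustrative fuzzy membership layers (values in [0, 1]) for three
# landslide conditioning factors on a small raster; names are assumptions.
rng = np.random.default_rng(5)
slope_m = rng.uniform(size=(50, 50))
landuse_m = rng.uniform(size=(50, 50))
rain_m = rng.uniform(size=(50, 50))
layers = np.stack([slope_m, landuse_m, rain_m])

def fuzzy_or(memberships):
    """Fuzzy OR: the maximum membership across layers."""
    return memberships.max(axis=0)

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma: (algebraic sum)**gamma * (algebraic product)**(1-gamma)."""
    product = memberships.prod(axis=0)
    algebraic_sum = 1.0 - np.prod(1.0 - memberships, axis=0)
    return (algebraic_sum ** gamma) * (product ** (1.0 - gamma))

hazard_or = fuzzy_or(layers)
hazard_gamma = fuzzy_gamma(layers, gamma=0.9)
print("share of cells in the top hazard class (OR):",
      float((hazard_or > 0.8).mean()))
print("share of cells in the top hazard class (gamma):",
      float((hazard_gamma > 0.8).mean()))
```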

  3. Empirical Bayes estimation for additive hazards regression models.

    Science.gov (United States)

    Sinha, Debajyoti; McHenry, M Brent; Lipsitz, Stuart R; Ghosh, Malay

    2009-09-01

    We develop a novel empirical Bayesian framework for the semiparametric additive hazards regression model. The integrated likelihood, obtained by integration over the unknown prior of the nonparametric baseline cumulative hazard, can be maximized using standard statistical software. Unlike the corresponding full Bayes method, our empirical Bayes estimators of regression parameters, survival curves and their corresponding standard errors have easily computed closed-form expressions and require no elicitation of hyperparameters of the prior. The method guarantees a monotone estimator of the survival function and accommodates time-varying regression coefficients and covariates. To facilitate frequentist-type inference based on large-sample approximation, we present the asymptotic properties of the semiparametric empirical Bayes estimates. We illustrate the implementation and advantages of our methodology with a reanalysis of a survival dataset and a simulation study.

  4. Development of hazard-compatible building fragility and vulnerability models

    Science.gov (United States)

    Karaca, E.; Luco, N.

    2008-01-01

    We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
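
    A common way to express fragility conditioned on spectral acceleration is a lognormal curve per damage state, combined with loss ratios into a vulnerability (mean loss) function. The sketch below uses that familiar form with made-up medians, dispersions and loss ratios; the paper's own fragilities come from inelastic response-history statistics rather than an assumed lognormal.

```python
import numpy as np
from scipy.stats import norm

# Illustrative lognormal fragility curves conditioned on spectral acceleration
# Sa (g); the medians, dispersions and loss ratios are assumptions, not HAZUS values.
damage_states = {             # median Sa (g), dispersion beta, loss ratio
    "slight":    (0.2, 0.6, 0.02),
    "moderate":  (0.4, 0.6, 0.10),
    "extensive": (0.8, 0.6, 0.50),
    "complete":  (1.5, 0.6, 1.00),
}

def p_exceed(sa, median, beta):
    """P(damage >= state | Sa) under a lognormal fragility model."""
    return norm.cdf(np.log(sa / median) / beta)

sa = np.linspace(0.05, 2.5, 50)
probs = {ds: p_exceed(sa, m, b) for ds, (m, b, _) in damage_states.items()}

# Probability of being exactly in each state, then the mean loss ratio vs Sa.
names = list(damage_states)
exact = {}
for i, ds in enumerate(names):
    upper = probs[names[i + 1]] if i + 1 < len(names) else 0.0
    exact[ds] = probs[ds] - upper
mean_loss = sum(exact[ds] * damage_states[ds][2] for ds in names)
print("mean loss ratio at Sa = 1.0 g:",
      round(float(np.interp(1.0, sa, mean_loss)), 3))
```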

  5. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc to human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
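
    Phase-oscillator descriptions of power grids are often written as second-order (swing-equation-like) Kuramoto models. The sketch below integrates such a model for a toy four-node grid to show synchronization; the topology, coupling and damping are illustrative assumptions, not the model or control strategies of the paper.

```python
import numpy as np

# Minimal second-order phase-oscillator (swing-equation-like) sketch of a
# small grid; parameters and topology are illustrative only.
n = 4
P = np.array([1.0, 1.0, -1.0, -1.0])       # two generators, two loads (balanced)
K = 2.0 * (np.ones((n, n)) - np.eye(n))    # all-to-all coupling (line capacities)
alpha, dt, steps = 0.5, 0.01, 20000

theta = np.zeros(n)
omega = np.zeros(n)
for _ in range(steps):
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    omega += dt * (P - alpha * omega + coupling)
    theta += dt * omega

# Near-zero frequency deviations indicate the grid has synchronized; reducing
# entries of K below a critical value triggers loss of synchrony, the kind of
# mechanism studied in cascade/blackout analyses.
print("steady-state frequency deviations:", np.round(omega, 4))
```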

  6. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  8. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    restrictive layer, CaCO3 content, organic matter content / A horizon thickness, erosion potential (k factor, Erosion/Deposition Model), available water ... ruins. The layers are interpreted along with other topographical features, such as distance to water, to make determinations on potential for CRs. This ... huge areas of the Chesapeake Bay or Atlantic Ranges, which are water ranges with the potential for historic and prehistoric sites. 2.2.9 Edwards

  9. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    Science.gov (United States)

    Lee, Saro; Park, Inhye

    2013-09-30

    Ground subsidence caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
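
    The workflow of the study (a 50/50 split, a tree classifier on conditioning factors, AUC validation) can be sketched with scikit-learn, substituting a CART tree for the CHAID and QUEST algorithms used in the paper; the factor layers and the subsidence-generating rule below are simulated assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 1000
# Hypothetical per-cell conditioning factors (stand-ins for the GIS layers)
depth_to_mine = rng.uniform(0, 300, n)    # m
slope = rng.uniform(0, 40, n)             # degrees
groundwater = rng.uniform(0, 50, n)       # m below surface
X = np.column_stack([depth_to_mine, slope, groundwater])
# Simulated subsidence occurrence: shallower workings -> higher hazard
p = 1 / (1 + np.exp(0.03 * depth_to_mine - 0.05 * slope - 1.0))
y = (rng.uniform(size=n) < p).astype(int)

# 50/50 split for training and validation, mirroring the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

# CART decision tree as a stand-in for the CHAID/QUEST trees used in the paper
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_va, tree.predict_proba(X_va)[:, 1])
print("validation AUC:", round(auc, 3))
```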

  10. Kernel-Based Visual Hazard Comparison (kbVHC): a Simulation-Free Diagnostic for Parametric Repeated Time-to-Event Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Välitalo, Pyry A J; Knibbe, Catherijne A J; Krekels, Elke H J

    2017-11-27

    Repeated time-to-event (RTTE) models are the preferred method to characterize the repeated occurrence of clinical events. Commonly used diagnostics for parametric RTTE models require representative simulations, which may be difficult to generate in situations with dose titration or informative dropout. Here, we present a novel simulation-free diagnostic tool for parametric RTTE models: the kernel-based visual hazard comparison (kbVHC). The kbVHC aims to evaluate whether the mean predicted hazard rate of a parametric RTTE model is an adequate approximation of the true hazard rate. Because the true hazard rate cannot be directly observed, the predicted hazard is compared to a non-parametric kernel estimator of the hazard rate. With the degree of smoothing of the kernel estimator determined by its bandwidth, the local kernel bandwidth is set to the lowest value that results in a bootstrap coefficient of variation (CV) of the hazard rate that is equal to or lower than a user-defined target value (CVtarget). The kbVHC was evaluated in simulated scenarios with different numbers of subjects, hazard rates, CVtarget values, and hazard models (Weibull, Gompertz, and circadian-varying hazard). The kbVHC was able to distinguish between Weibull and Gompertz hazard models, even when the hazard rate was relatively low (< 2 events per subject). Additionally, it was more sensitive than the Kaplan-Meier VPC in detecting circadian variation of the hazard rate. An additional useful feature of the kernel estimator is that it can be generated prior to model development to explore the shape of the hazard rate function.
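
    A stripped-down analogue of the kernel hazard estimator at the core of kbVHC is shown below: Nelson-Aalen increments smoothed with a Gaussian kernel on simulated, uncensored, single-event times with a fixed bandwidth. The actual kbVHC handles repeated events and tunes the bandwidth against a bootstrap CV target, neither of which is reproduced here.

```python
import numpy as np

# Gaussian-kernel estimate of a hazard rate from event times, smoothing the
# Nelson-Aalen increments; no censoring and a fixed bandwidth (simplifications).
rng = np.random.default_rng(7)
times = np.sort(rng.weibull(1.5, size=500) * 10.0)   # simulated event times

def kernel_hazard(grid, event_times, bandwidth):
    m = len(event_times)
    at_risk = m - np.arange(m)                 # risk-set size just before each event
    increments = 1.0 / at_risk                 # Nelson-Aalen jump at each event
    diffs = (grid[:, None] - event_times[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return (weights * increments[None, :]).sum(axis=1)

grid = np.linspace(0.5, 15, 100)
hz = kernel_hazard(grid, times, bandwidth=1.0)
print("estimated hazard at t=5:", round(float(np.interp(5.0, grid, hz)), 4))
# A bootstrap over subjects could be added to pick the bandwidth that meets a
# target coefficient of variation, as the kbVHC procedure does.
```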

  11. Flood hazard maps from SAR data and global hydrodynamic models

    Science.gov (United States)

    Giustarini, Laura; Chini, Marci; Hostache, Renaud; Matgen, Patrick; Pappenberger, Florian; Bally, Phillippe

    2015-04-01

    With flood consequences likely to amplify because of growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are greatly needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method is presented to integrate global flood inundation modeling and microwave remote sensing. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers the opportunity to estimate flood non-exceedance probabilities in a robust way. The probabilities can later be attributed to historical satellite observations. SAR-derived flood extent maps with their associated non-exceedance probabilities are then combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is most of the time higher than that of a global inundation model. The method can be applied to any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. We applied the method on the Severn River (UK) and on the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. An additional analysis has been performed on the Severn River, using high resolution SAR data from the COSMO-SkyMed SAR constellation, acquired for a single flood event (one flood map per day between 27/11/2012 and 4/12/2012). The results showed that it is vital to observe the peak of the flood. However, a single

  12. Uncertainties in modeling hazardous gas releases for emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Baumann-Stanzer, Kathrin; Stenzel, Sirma [Zentralanstalt fuer Meteorologie und Geodynamik, Vienna (Austria)

    2011-02-15

    In case of an accidental release of toxic gases, emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures of uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stability and roughness lengths indicate the model sensitivity to these input parameters. In-situ measurements at two urban near-traffic sites are compared to results of the Integrated Nowcasting through Comprehensive Analysis (INCA) in order to quantify uncertainties in the meteorological input. The hazard zone estimates from the models vary up to a factor of 4 due to different input requirements as well as due to different internal model assumptions. None of the models is found to be 'more conservative' than the others in all scenarios. INCA wind speeds are correlated to in-situ observations at two urban sites in Vienna with a factor of 0.89. The standard deviations of the normal error distribution are 0.8 m/s in wind speed, on the scale of 50 degrees in wind direction, up to 4 °C in air temperature and up to 10 % in relative humidity. The observed air temperature and humidity are well reproduced by INCA with correlation coefficients of 0.96 to 0.99. INCA is therefore found to give a good representation of the local meteorological conditions. Besides real-time data, the INCA short-range forecast for the following hours may support the action planning of the first responders. (orig.)

  13. Uncertainties in modeling hazardous gas releases for emergency response

    Directory of Open Access Journals (Sweden)

    Kathrin Baumann-Stanzer

    2011-02-01

    In case of an accidental release of toxic gases, emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures of uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stability and roughness lengths indicate the model sensitivity to these input parameters. In-situ measurements at two urban near-traffic sites are compared to results of the Integrated Nowcasting through Comprehensive Analysis (INCA) in order to quantify uncertainties in the meteorological input. The hazard zone estimates from the models vary up to a factor of 4 due to different input requirements as well as due to different internal model assumptions. None of the models is found to be 'more conservative' than the others in all scenarios. INCA wind speeds are correlated to in-situ observations at two urban sites in Vienna with a factor of 0.89. The standard deviations of the normal error distribution are 0.8 m/s in wind speed, on the scale of 50 degrees in wind direction, up to 4 °C in air temperature and up to 10 % in relative humidity. The observed air temperature and humidity are well reproduced by INCA with correlation coefficients of 0.96 to 0.99. INCA is therefore found to give a good representation of the local meteorological conditions. Besides real-time data, the INCA short-range forecast for the following hours may support the action planning of the first responders.

  14. The role of hazardous drinking reductions in predicting depression and anxiety symptom improvement among psychiatry patients: A longitudinal study.

    Science.gov (United States)

    Bahorik, Amber L; Leibowitz, Amy; Sterling, Stacy A; Travis, Adam; Weisner, Constance; Satre, Derek D

    2016-12-01

    Co-occurrence of depression, anxiety, and hazardous drinking is high in clinical samples. Hazardous drinking can worsen depression and anxiety symptoms (and vice versa), yet less is known about whether reductions in hazardous drinking improve symptom outcomes. Three hundred and seven psychiatry outpatients were interviewed (baseline, 3, and 6 months) for hazardous drinking (drinking over recommended daily limits), depression (PHQ-9), and anxiety (GAD-7) as part of a hazardous drinking intervention trial. Longitudinal growth models tested associations between hazardous drinking and symptoms (and reciprocal effects between symptoms and hazardous drinking), adjusting for treatment effects. At baseline, participants had moderate anxiety (M=10.81; SD=10.82) and depressive symptoms (M=13.91; SD=5.58); 60.0% consumed alcohol at hazardous drinking levels. Over 6 months, participants' anxiety (B=-3.03) and depressive symptoms improved; continued hazardous drinking slowed anxiety (B=0.09, p=.005) and depressive symptom (B=0.10, p=.004) improvement, while reductions in hazardous drinking led to faster anxiety (B=-0.09, p=.010) and depressive (B=-0.10, p=.015) symptom improvement. Neither anxiety (B=0.07, p=.066) nor depressive (B=0.05, p=.071) symptoms were associated with hazardous drinking outcomes. Participants were psychiatry outpatients, limiting generalizability. Reducing hazardous drinking can improve depression and anxiety symptoms, but continued hazardous use slows recovery for psychiatry patients. Hazardous drinking-focused interventions may be helpful in promoting symptom improvement in clinical populations. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations

    Science.gov (United States)

    Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang

    2017-09-01

    Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.

  16. Modelling of a spread of hazardous substances in a Floreon+ system

    Science.gov (United States)

    Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David

    2017-07-01

    This paper focuses on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system at the request of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate predictions for smog situations, which are a frequent problem in the region. It can be operated by a non-scientific user through the Floreon+ client and can be used as a short-term prediction model of the evolution of concentrations of dangerous substances (SO2, PMx) from stationary sources, such as heavy industry factories, local furnaces or highways, or as a fast prediction of the spread of hazardous substances in case of a crash of a mobile source of contamination (transport of dangerous substances) or a leakage in a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system with the HPC infrastructure necessary for computing such a model, and the model itself are described below.

  17. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A. [St. Petersburg Mining Inst. (Russian Federation)

    1994-07-01

    This paper introduces modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Approaches to incorporate physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of nuclear power plants designed in Russia on groundwater reservoirs.

  18. Flexible hazard regression modeling for medical cost data.

    Science.gov (United States)

    Jain, Arvind K; Strawderman, Robert L

    2002-03-01

    The modeling of lifetime (i.e. cumulative) medical cost data in the presence of censored follow-up is complicated by induced informative censoring, rendering standard survival analysis tools invalid. With few exceptions, recently proposed nonparametric estimators for such data do not extend easily to handle covariate information. We propose to model the hazard function for lifetime cost endpoints using an adaptation of the HARE methodology (Kooperberg, Stone, and Truong, Journal of the American Statistical Association, 1995, 90, 78-94). Linear splines and their tensor products are used to adaptively build a model that incorporates covariates and covariate-by-cost interactions without restrictive parametric assumptions. The informative censoring problem is handled using inverse probability of censoring weighted estimating equations. The proposed method is illustrated using simulation and also with data on the cost of dialysis for patients with end-stage renal disease.
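
    Inverse probability of censoring weighting, the device the paper uses to handle induced informative censoring, can be sketched as follows: each uncensored subject is weighted by the inverse of the Kaplan-Meier estimate of the censoring survival function at their observed time. The simulated data and distributions below are assumptions, not the dialysis cost data.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
t_event = rng.exponential(5.0, n)          # true cost-accrual / survival times
t_cens = rng.exponential(8.0, n)           # censoring times
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(int)    # 1 = event observed, 0 = censored

def km_survival(times, events, eval_times):
    """Kaplan-Meier survival estimate evaluated at eval_times (no ties assumed)."""
    m = len(times)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = m - np.arange(m)
    surv_at_obs = np.cumprod(1.0 - events / at_risk)
    idx = np.searchsorted(times, eval_times, side="right") - 1
    return np.where(idx >= 0, surv_at_obs[np.clip(idx, 0, m - 1)], 1.0)

# Censoring distribution: treat censoring as the "event" (1 - delta).
s_censor = km_survival(time, 1 - delta, time)
weights = np.where(delta == 1, 1.0 / np.clip(s_censor, 1e-6, None), 0.0)
print("mean IPCW weight among uncensored subjects:",
      round(float(weights[delta == 1].mean()), 3))
```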

  19. Predictive modelling of football injuries

    OpenAIRE

    Kampakis, S.

    2016-01-01

    The goal of this thesis is to investigate the potential of predictive modelling for football injuries. This work was conducted in close collaboration with Tottenham Hotspurs FC (THFC), the PGA European tour and the participation of Wolverhampton Wanderers (WW). Three investigations were conducted: 1. Predicting the recovery time of football injuries using the UEFA injury recordings: The UEFA recordings is a common standard for recording injuries in professional football. For...

  20. Predictive modelling of football injuries

    OpenAIRE

    Kampakis, Stylianos

    2016-01-01

    The goal of this thesis is to investigate the potential of predictive modelling for football injuries. This work was conducted in close collaboration with Tottenham Hotspurs FC (THFC), the PGA European tour and the participation of Wolverhampton Wanderers (WW). Three investigations were conducted: 1. Predicting the recovery time of football injuries using the UEFA injury recordings: The UEFA recordings is a common standard for recording injuries in professional football. For this investigatio...

  1. Opinion: The use of natural hazard modeling for decision making under uncertainty

    Science.gov (United States)

    David E. Calkin; Mike Mentis

    2015-01-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...

  2. Modeling emergency evacuation for major hazard industrial sites

    Energy Technology Data Exchange (ETDEWEB)

    Georgiadou, Paraskevi S. [Hellenic Institute for Occupational Health and Safety, Liossion 143 and Theirsiou 6, Athens 104 45 (Greece)]. E-mail: pgeor@central.ntua.gr; Papazoglou, Ioannis A. [Systems Reliability and Industrial Safety Laboratory, National Center of Scientific Research ' Demokritos' , Agia Paraskevi, Athens 153 10 (Greece)]. E-mail: yannisp@ipta.demokritos.gr; Kiranoudis, Chris T. [School of Chemical Engineering, National Technical University of Athens, Zografou Campus, Athens 157 80 (Greece)]. E-mail: kyr@chemeng.ntua.gr; Markatos, Nikolaos C. [School of Chemical Engineering, National Technical University of Athens, Zografou Campus, Athens 157 80 (Greece)]. E-mail: n.markatos@ntua.gr

    2007-10-15

    A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies.
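
    A toy version of the node-and-link Markov evacuation idea is sketched below: a small transition matrix moves the expected population between nodes of a road network until it reaches an absorbing exit node. The network, rates and populations are invented for illustration, and the Monte Carlo sampling of individual trajectories described in the paper is omitted.

```python
import numpy as np

# Node-based Markov evacuation sketch; the exit node is absorbing.
nodes = ["plant_area", "junction", "town", "exit"]
P = np.array([
    [0.6, 0.4, 0.0, 0.0],   # from plant_area
    [0.0, 0.5, 0.3, 0.2],   # from junction
    [0.0, 0.0, 0.7, 0.3],   # from town
    [0.0, 0.0, 0.0, 1.0],   # exit is absorbing
])
population = np.array([5000.0, 2000.0, 8000.0, 0.0])   # initial occupancy

history = [population]
for step in range(30):                 # e.g. 30 five-minute time steps
    population = population @ P        # expected occupancy after one step
    history.append(population)

history = np.array(history)
evacuated = history[:, -1] / history[0].sum()
print("fraction evacuated after 30 steps:", round(float(evacuated[-1]), 3))
```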

  3. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.

    Science.gov (United States)

    Song, Xiao; Wang, Ching-Yun

    2014-12-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized methods of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.

  4. A contrail cirrus prediction model

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2012-05-01

    A new model to simulate and predict the properties of a large ensemble of contrails as a function of given air traffic and meteorology is described. The model is designed for approximate prediction of contrail cirrus cover and analysis of contrail climate impact, e.g. within aviation system optimization processes. The model simulates the full contrail life-cycle. Contrail segments form between waypoints of individual aircraft tracks in sufficiently cold and humid air masses. The initial contrail properties depend on the aircraft. The advection and evolution of the contrails is followed with a Lagrangian Gaussian plume model. Mixing and bulk cloud processes are treated quasi-analytically or with an effective numerical scheme. Contrails disappear when the bulk ice content is sublimating or precipitating. The model has been implemented in a "Contrail Cirrus Prediction Tool" (CoCiP). This paper describes the model assumptions, the equations for individual contrails, and the analysis method for contrail-cirrus cover derived from the optical depth of the ensemble of contrails and background cirrus. The model has been applied to a case study and compared to the results of other models and in-situ contrail measurements. The simple model reproduces a considerable part of observed contrail properties. Mid-aged contrails provide the largest contributions to the product of optical depth and contrail width, important for climate impact.

  5. KarstALEA - a scientifically based method to predict karst-related hazards in underground constructions

    Science.gov (United States)

    Schmassmann, S.; Filliponi, M.; Jeannin, P.-Y.; Parriaux, A.; Malard, A.; Vouillamoz, J.

    2012-04-01

    ... discharge and water pressure within a karst conduit and their variability, and the amount and characteristics of sediments within the karst conduits (KarstALEA zones). Although the KarstALEA method does not allow prediction of the real geometry of the karst conduits (e.g. their exact position), it significantly improves the prediction of the distribution of karst conduits and their characterisation (size, orientation, presence of water, sediment fillings). Back analysis of existing tunnels and prediction for new tunnels showed that KarstALEA is an adequate and efficient method to predict karst-related hazards and is applicable in different contexts.

  6. VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation

    Science.gov (United States)

    Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.

    2009-12-01

    Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the

  7. Advancements in the global modelling of coastal flood hazard

    Science.gov (United States)

    Muis, Sanne; Verlaan, Martin; Nicholls, Robert J.; Brown, Sally; Hinkel, Jochen; Lincke, Daniel; Vafeidis, Athanasios T.; Scussolini, Paolo; Winsemius, Hessel C.; Ward, Philip J.

    2017-04-01

    Storm surges and high tides can cause catastrophic floods. Due to climate change and socio-economic development the potential impacts of coastal floods are increasing globally. Global modelling of coastal flood hazard provides an important perspective to quantify and effectively manage this challenge. In this contribution we show two recent advancements in global modelling of coastal flood hazard: 1) a new improved global dataset of extreme sea levels, and 2) an improved vertical datum for extreme sea levels. Both developments have important implications for estimates of exposure and inundation modelling. For over a decade, the only global dataset of extreme sea levels was the DINAS-COAST Extreme Sea Levels (DCESL), which uses a static approximation to estimate total water levels for different return periods. Recent advances have enabled the development of a new dynamically derived dataset: the Global Tide and Surge Reanalysis (GTSR) dataset. Here we present a comparison of the DCESL and GTSR extreme sea levels and the resulting global flood exposure for present-day conditions. While DCESL generally overestimates extremes, GTSR underestimates extremes, particularly in the tropics. This results in differences in estimates of flood exposure. When using the 1 in 100-year GTSR extremes, the exposed global population is 28% lower than when using the 1 in 100-year DCESL extremes. Previous studies at continental to global-scales have not accounted for the fact that GTSR and DCESL are referenced to mean sea level, whereas global elevation datasets, such as SRTM, are referenced to the EGM96 geoid. We propose a methodology to correct for the difference in vertical datum and demonstrate that this also has a large effect on exposure. For GTSR, the vertical datum correction results in a 60% increase in global exposure.

  8. Thermophysical Modeling of Potentially Hazardous Asteroid (85989) 1999 JD6

    Science.gov (United States)

    Marshall, Sean E.; Howell, Ellen S.; Vervack, Ronald J.; Magri, Christopher; Crowell, Jenna L.; Fernandez, Yanga R.; Campbell, Donald B.; Nolan, Michael C.; Reddy, Vishnu; Pravec, Petr; Bozek, Brandon

    2017-10-01

    We present thermal and photometric properties of potentially hazardous near-Earth asteroid (85989) 1999 JD6, a contact binary with a maximum breadth of three kilometers. JD6's shape and rotation state are well constrained by radar and lightcurve observations. We used the absolutely calibrated lightcurves to determine JD6's photometric properties. We observed JD6 from NASA's InfraRed Telescope Facility (IRTF) on three nights in 2010 (from 0.8 to 4 microns) and on two nights in 2015 (from 0.7 to 5 microns). Additionally, JD6 has been observed in the mid-infrared using Spitzer (in 2008 and 2009) and WISE (in 2010). We compared those observations to model spectra from our SHERMAN software to determine JD6's thermal properties.

  9. Preliminary deformation model for National Seismic Hazard map of Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo,; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)

    2015-04-24

    A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed as a block rotation and strain accumulation function in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creeping in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults where significant deformation reaches up to 25 mm/year.
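
    As a rough illustration of the rigid-block component described above, the sketch below computes the surface velocity implied by an Euler rotation vector, v = Omega x r. It is a generic plate-kinematics calculation under a spherical-Earth assumption, not the authors' code; the pole, rotation rate, and site are hypothetical.

        import numpy as np

        # Hedged sketch of rigid-block motion: the velocity of a point on a rotating
        # block is v = Omega x r, with Omega the Euler rotation vector and r the
        # position vector of the site. Spherical Earth, illustrative values only.
        EARTH_RADIUS_M = 6.371e6
        DEG = np.pi / 180.0

        def site_vector(lat_deg, lon_deg):
            """Unit ECEF position vector of a site (spherical approximation)."""
            lat, lon = lat_deg * DEG, lon_deg * DEG
            return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

        def block_velocity_mm_per_yr(pole_lat, pole_lon, omega_deg_per_myr, site_lat, site_lon):
            """Velocity (ECEF components, mm/yr) of a site on a rigid rotating block."""
            omega_rad_per_yr = omega_deg_per_myr * DEG / 1.0e6
            rotation_vector = omega_rad_per_yr * site_vector(pole_lat, pole_lon)
            v_m_per_yr = np.cross(rotation_vector, EARTH_RADIUS_M * site_vector(site_lat, site_lon))
            return v_m_per_yr * 1.0e3  # mm/yr

        # Hypothetical Euler pole and site, for illustration only.
        print(block_velocity_mm_per_yr(pole_lat=-40.0, pole_lon=100.0,
                                       omega_deg_per_myr=0.6,
                                       site_lat=-7.0, site_lon=110.0))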

  10. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    NARCIS (Netherlands)

    Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.

    2017-01-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood

  11. A Probabilistic Tsunami Hazard Study of the Auckland Region, Part I: Propagation Modelling and Tsunami Hazard Assessment at the Shoreline

    Science.gov (United States)

    Power, William; Wang, Xiaoming; Lane, Emily; Gillibrand, Philip

    2013-09-01

    Regional source tsunamis represent a potentially devastating threat to coastal communities in New Zealand, yet are infrequent events for which little historical information is available. It is therefore essential to develop robust methods for quantitatively estimating the hazards posed, so that effective mitigation measures can be implemented. We develop a probabilistic model for the tsunami hazard posed to the Auckland region of New Zealand from the Kermadec Trench and the southern New Hebrides Trench subduction zones. An innovative feature of our model is the systematic analysis of uncertainty regarding the magnitude-frequency distribution of earthquakes in the source regions. The methodology is first used to estimate the tsunami hazard at the coastline, and then used to produce a set of scenarios that can be applied to produce probabilistic maps of tsunami inundation for the study region; the production of these maps is described in part II. We find that the 2,500 year return period regional source tsunami hazard for the densely populated east coast of Auckland is dominated by events originating in the Kermadec Trench, while the equivalent hazard to the sparsely populated west coast is approximately equally due to events on the Kermadec Trench and the southern New Hebrides Trench.

  12. About Using Predictive Models and Tools To Assess Chemicals under TSCA

    Science.gov (United States)

    As part of EPA's effort to promote chemical safety, OPPT provides public access to predictive models and tools which can help inform the public on the hazards and risks of substances and improve chemical management decisions.

  13. Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA)

    Science.gov (United States)

    EPA has developed databases and predictive models to help evaluate the hazard, exposure, and risk of chemicals released to the environment and how workers, the general public, and the environment may be exposed to and affected by them.

  14. Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank

    2008-11-01

    Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application

  15. Chemical agnostic hazard prediction: Statistical inference of toxicity pathways - data for Figure 2

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset comprises one SigmaPlot 13 file containing measured survival data and survival data predicted from the model coefficients selected by the LASSO...

  16. Numerical Modelling of Extreme Natural Hazards in the Russian Seas

    Science.gov (United States)

    Arkhipkin, Victor; Dobrolyubov, Sergey; Korablina, Anastasia; Myslenkov, Stanislav; Surkova, Galina

    2017-04-01

    Storm surges and extreme waves are severe natural sea hazards. Due to the almost complete lack of natural observations of these phenomena in the Russian seas (Caspian, Black, Azov, Baltic, White, Barents, Okhotsk, Kara), especially about their formation, development and destruction, they have been studied using numerical simulation. To calculate the parameters of wind waves for the seas listed above, except the Barents Sea, the spectral model SWAN was applied. For the Barents and Kara seas we used the WAVEWATCH III model. Formation and development of storm surges were studied using the ADCIRC model. The input data for the models are bottom topography, wind, atmospheric pressure and ice cover. In modeling of surges in the White and Barents seas, tidal level fluctuations were used. They were calculated from 16 harmonic constants obtained from the global tide atlas FES2004. Wind, atmospheric pressure and ice cover were taken from the NCEP/NCAR reanalysis for the period from 1948 to 2010, and the NCEP/CFSR reanalysis for the period from 1979 to 2015. In modeling we used both regular and unstructured grids. The wave climate of the Caspian, Black, Azov, Baltic and White seas was obtained. Also, the extreme wave height possible once in 100 years has been calculated. The statistics of storm surges for the White, Barents and Azov seas were evaluated. The contribution of wind and atmospheric pressure to the formation of surges was estimated. A technique for the climatic forecasting of the frequency of storm synoptic situations was developed and applied for every sea. The research was carried out with financial support of the RFBR (grant 16-08-00829).

  17. Web-based Services for Earth Observing and Model Data in National Applications and Hazards

    Science.gov (United States)

    Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.

    2005-12-01

    response system. Globus 4.0 is used for developing the geospatial information, modeling, and GIS services for geospatial computing. Various models will be supported in the Grid framework through appropriate "wrappers" to enhance interoperability with the applicable data resources. Coordination of the dispatch of requests to different computers, to provide computing services to the Grid Portals is another functionality that we will discuss. Grid Portals will identify services from a service description table and communicate with the Co-Allocator to negotiate a computing resource. The ongoing modeling efforts in SCS include numerical weather prediction, atmospheric transport and dispersion prediction, and development of computational methods to optimize the use of remotely sensed data to improve accuracy of the models. These modeling capabilities are applied to important environmental and hazard prediction problems in near real time (such as hurricane forecasting, dust storm simulations, forest fires, and other hazard predictions), using, WRF, and OMEGA models. The results from our modeling simulations combined with the global model results from national operational centers are then displayed in CEOSR web-based applications to inform the public and other stakeholders. Lessons learned will be described from national applications and hazards research in the MAGIC consortium.

  18. Research collaboration, hazard modeling and dissemination in volcanology with Vhub

    Science.gov (United States)

    Palma Lizana, J. L.; Valentine, G. A.

    2011-12-01

    Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University

  19. Operational and contractual impacts in E and P offshore during predicted natural hazards

    Energy Technology Data Exchange (ETDEWEB)

    Benevides, Paulo Roberto Correa de Sa e [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Generally, when E and P operators using DP (Dynamic Positioning) are advised in advance of a possible natural hazard occurrence, they usually treat it as an emergency situation, and their main action is oriented only towards preparing the first response and using the 'force majeure' argument to protect themselves from any additional responsibility. When the natural phenomenon actually happens, the expenses due to the losses are accepted because they were already considered in the budget as 'losses due to accident', and they will be shared by the partners of the project according to the corresponding contractual terms. This paper describes real cases of the evolution of predictions for natural hazards in offshore basins in Brazil, Western Africa and the Gulf of Mexico, where PETROBRAS and many other oil companies have used DP operations. It proposes some alternative procedures, based on BCM (Business Continuity Management), to manage natural crises instead of the common use of the traditional 'force majeure' argumentation. (author)

  20. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), i.e., the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study is presented to show how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). Also, this study briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
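
    The combinatory process mentioned above can be illustrated with a short sketch. This is not the study's MELCOR methodology; it only shows, under simple assumptions, how per-pathway leak path factors might be combined into a single building LPF. The pathway structure, flow shares, and retention values below are hypothetical.

        # Hedged sketch (not the MELCOR calculation): combining leak path factors.
        # Material released in a room can leave through several parallel pathways;
        # here the total LPF is the flow-weighted sum of the products of the
        # fractions passing each barrier along a serial path. Hypothetical numbers.

        def serial_lpf(fractions):
            """LPF of one pathway: product of the fractions passing each barrier in series."""
            total = 1.0
            for f in fractions:
                total *= f
            return total

        def total_lpf(pathways):
            """Total LPF: sum over parallel pathways of (flow share x serial LPF)."""
            return sum(share * serial_lpf(fractions) for share, fractions in pathways)

        # Example: 70% of the flow exits via a filtered ventilation line (two barriers,
        # each passing 50% of the airborne powder), 30% via an open doorway that retains nothing.
        pathways = [
            (0.7, [0.5, 0.5]),   # filtered ventilation path
            (0.3, [1.0]),        # open doorway
        ]
        print(total_lpf(pathways))  # 0.475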

  1. A semi-parametric generalization of the Cox proportional hazards regression model: Inference and Applications

    OpenAIRE

    Devarajan, Karthik; Ebrahimi, Nader

    2011-01-01

    The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function taking a form similar to the Cox PH model, with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard and it also includes, for the two sample problem, the case of two...

  2. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature... This study proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified... and the relevance to natural hazard risk assessment is illustrated.

  3. Numerical Simulations as Tool to Predict Chemical and Radiological Hazardous Diffusion in Case of Nonconventional Events

    Directory of Open Access Journals (Sweden)

    J.-F. Ciparisse

    2016-01-01

    Full Text Available CFD (Computational Fluid Dynamics) simulations are widely used nowadays to predict the behaviour of fluids in pure research and in industrial applications. This approach makes it possible to get quantitatively meaningful results, often in good agreement with the experimental ones. The aim of this paper is to show how CFD calculations can help to understand the time evolution of two possible CBRNe (Chemical-Biological-Radiological-Nuclear-explosive) events: (1) hazardous dust mobilization due to the interaction between a jet of air and a metallic powder in case of a LOVA (Loss Of Vacuum Accident), which is one of the possible accidents that can occur in experimental nuclear fusion plants; (2) toxic gas release in the atmosphere. The scenario analysed in the paper has consequences similar to those expected in case of a release of dangerous substances (chemical or radioactive) in enclosed or open environments during nonconventional events (like accidents or man-made or natural disasters).

  4. Assessment of hazard metrics for predicting field benthic invertebrate toxicity in the Detroit River, Ontario, Canada.

    Science.gov (United States)

    McPhedran, Kerry N; Grgicak-Mannion, Alice; Paterson, Gord; Briggs, Ted; Ciborowski, Jan Jh; Haffner, G Douglas; Drouillard, Ken G

    2017-03-01

    Numerical sediment quality guidelines (SQGs) are frequently used to interpret site-specific sediment chemistry and predict potential toxicity to benthic communities. These SQGs are useful for a screening line of evidence (LOE) that can be combined with other LOEs in a full weight of evidence (WOE) assessment of impacted sites. Three common multichemical hazard quotient methods (probable effect concentration [PEC]-Qavg, PEC-Qmet, and PEC-Qsum) and a novel hazard score (HZD) approach were used in conjunction with a consensus-based set of SQGs to evaluate the ability of different scoring metrics to predict the biological effects of sediment contamination under field conditions. Multivariate analyses were first used to categorize river sediments into distinct habitats based on a set of physicochemical parameters to include gravel, low and high flow sand, and silt. For high flow sand and gravel, no significant dose-response relationships between numerically dominant species and various toxicity metric scores were observed. Significant dose-response relationships were observed for chironomid abundances and toxicity scores in low flow sand and silt habitats. For silt habitats, the HZD scoring metric provided the best predictor of chironomid abundances compared to various PEC-Q methods according to goodness-of-fit tests. For low flow sand habitats, PEC-Qsum, followed by HZD, provided the best predictors of chironomid abundance. Differences in apparent chironomid toxicity between the 2 habitats suggest habitat-specific differences in chemical bioavailability and indicator taxa sensitivity. Using an IBI method, the HZD, PEC-Qavg, and PEC-Qmet approaches provided reasonable correlations with calculated IBI values in both silt and low flow sand habitats but not for gravel or high flow sands. Computation differences between the various multi-chemical toxicity scoring metrics and how this contributes to bias in different estimates of chemical mixture toxicity scores are
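
    As a rough illustration of the quotient-based scoring discussed above, the sketch below computes a simple average PEC quotient (measured concentration divided by the consensus PEC, averaged over chemicals). The concentrations and PEC values are illustrative, and the study's PEC-Qmet, PEC-Qsum, and HZD metrics follow their own definitions that are not reproduced here.

        # Hedged sketch of a mean probable-effect-concentration quotient (PEC-Q).
        # A quotient is a measured sediment concentration divided by the consensus
        # PEC for that chemical; a multichemical score averages the quotients.
        # All values below are illustrative only.

        measured_mg_per_kg = {"cadmium": 2.0, "lead": 180.0, "total_PCBs": 0.4}
        pec_mg_per_kg      = {"cadmium": 4.98, "lead": 128.0, "total_PCBs": 0.676}

        def pec_q_avg(measured, pec):
            """Average of the individual concentration/PEC quotients."""
            quotients = [measured[chem] / pec[chem] for chem in measured]
            return sum(quotients) / len(quotients)

        print(round(pec_q_avg(measured_mg_per_kg, pec_mg_per_kg), 2))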

  5. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into two groups of churners and nonchurners and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.

  6. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i. e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
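
    The final step described above, combining simulated intensities with rupture probabilities into a hazard curve, can be sketched as follows. This is a generic site-based PSHA calculation with hypothetical rates and intensity samples, not the CyberShake codebase.

        import numpy as np

        # Hedged sketch (not the CyberShake software): building a site hazard curve
        # from simulated intensities. Each rupture has an annual occurrence rate and
        # a set of simulated intensity measures (one per rupture variation); the
        # annual rate of exceeding level x sums rate_i * P(IM > x | rupture_i).
        # Rates and intensities below are hypothetical.
        ruptures = [
            {"annual_rate": 1e-3, "im_samples": np.array([0.05, 0.08, 0.12, 0.20])},  # g
            {"annual_rate": 4e-4, "im_samples": np.array([0.15, 0.25, 0.40, 0.60])},
        ]

        def exceedance_rate(level):
            """Annual rate at which the intensity measure exceeds `level` at the site."""
            rate = 0.0
            for rup in ruptures:
                prob_exceed = np.mean(rup["im_samples"] > level)
                rate += rup["annual_rate"] * prob_exceed
            return rate

        levels = np.array([0.05, 0.1, 0.2, 0.4])
        hazard_curve = [exceedance_rate(x) for x in levels]
        print(list(zip(levels.tolist(), hazard_curve)))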

  7. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh

    2014-01-01

    Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy-to-interpret tree-like structure. Four models were created using Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using the Exhaustive CHAID (82.0%) model, compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, five factors were identified as the most important factors, which are slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
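
    A minimal sketch of this kind of workflow, using scikit-learn's CART implementation as a stand-in (CHAID, Exhaustive CHAID, and QUEST are separate algorithms not available there): terrain factors as inputs, a binary landslide label as target, 10-fold cross-validation, and factor importances read from the fitted tree. The factor values and labels below are synthetic.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        # Hedged sketch of the workflow (not the authors' code). Synthetic data stand
        # in for the DEM-derived factors and the landslide / no-landslide labels.
        rng = np.random.default_rng(0)
        n = 1000
        factors = ["slope_angle", "dist_drainage", "surface_area", "slope_aspect", "cross_curvature"]
        X = rng.normal(size=(n, len(factors)))                               # hypothetical factor values
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)   # hypothetical labels

        tree = DecisionTreeClassifier(max_depth=6, random_state=0)
        scores = cross_val_score(tree, X, y, cv=10)          # 10-fold cross-validation
        print("mean CV accuracy:", scores.mean().round(3))

        tree.fit(X, y)                                       # factor importances from the fitted tree
        for name, imp in sorted(zip(factors, tree.feature_importances_), key=lambda t: -t[1]):
            print(name, round(imp, 3))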

  8. A flexible alternative to the Cox proportional hazards model for assessing the prognostic accuracy of hospice patient survival.

    Directory of Open Access Journals (Sweden)

    Branko Miladinovic

    Full Text Available Prognostic models are often used to estimate the length of patient survival. The Cox proportional hazards model has traditionally been applied to assess the accuracy of prognostic models. However, it may be suboptimal due to the inflexibility to model the baseline survival function and when the proportional hazards assumption is violated. The aim of this study was to use internal validation to compare the predictive power of a flexible Royston-Parmar family of survival functions with the Cox proportional hazards model. We applied the Palliative Performance Scale on a dataset of 590 hospice patients at the time of hospice admission. The retrospective data were obtained from the Lifepath Hospice and Palliative Care center in Hillsborough County, Florida, USA. The criteria used to evaluate and compare the models' predictive performance were the explained variation statistic R2, scaled Brier score, and the discrimination slope. The explained variation statistic demonstrated that overall the Royston-Parmar family of survival functions provided a better fit (R2 = 0.298; 95% CI: 0.236-0.358) than the Cox model (R2 = 0.156; 95% CI: 0.111-0.203). The scaled Brier scores and discrimination slopes were consistently higher under the Royston-Parmar model. Researchers involved in prognosticating patient survival are encouraged to consider the Royston-Parmar model as an alternative to Cox.
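
    For reference, a Royston-Parmar flexible parametric model (in its proportional-hazards form) is commonly written on the log cumulative-hazard scale as below; the exact spline specification used in the study may differ.

        \ln H(t \mid \mathbf{x}) = s(\ln t; \boldsymbol{\gamma}) + \mathbf{x}^{\top}\boldsymbol{\beta}

    where s(\cdot; \boldsymbol{\gamma}) is a restricted cubic spline in log time that flexibly models the baseline, whereas the Cox model leaves the baseline \ln H_0(t) unspecified and relies on the partial likelihood.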

  9. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    Science.gov (United States)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas are necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data where the number of data is limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.

  11. Modelling hazardous surface hoar layers in the mountain snowpack over space and time

    Science.gov (United States)

    Horton, Simon Earl

    Surface hoar layers are a common failure layer in hazardous snow slab avalanches. Surface hoar crystals (frost) initially form on the surface of the snow, and once buried can remain a persistent weak layer for weeks or months. Avalanche forecasters have difficulty tracking the spatial distribution and mechanical properties of these layers in mountainous terrain. This thesis presents numerical models and remote sensing methods to track the distribution and properties of surface hoar layers over space and time. The formation of surface hoar was modelled with meteorological data by calculating the downward flux of water vapour from the atmospheric boundary layer. The timing of surface hoar formation and the modelled crystal size was verified at snow study sites throughout western Canada. The major surface hoar layers over several winters were predicted with fair success. Surface hoar formation was modelled over various spatial scales using meteorological data from weather forecast models. The largest surface hoar crystals formed in regions and elevation bands with clear skies, warm and humid air, cold snow surfaces, and light winds. Field surveys measured similar regional-scale patterns in surface hoar distribution. Surface hoar formation patterns on different slope aspects were observed, but were not modelled reliably. Mechanical field tests on buried surface hoar layers found layers increased in shear strength over time, but had persistent high propensity for fracture propagation. Layers with large crystals and layers overlying hard melt-freeze crusts showed greater signs of instability. Buried surface hoar layers were simulated with the snow cover model SNOWPACK and verified with avalanche observations, finding most hazardous surface hoar layers were identified with a structural stability index. Finally, the optical properties of surface hoar crystals were measured in the field with spectral instruments. Large plate-shaped crystals were less reflective at shortwave

  12. After the damages: Lessons learned from recent earthquakes for ground-motion prediction and seismic hazard assessment (C.F. Gauss Lecture)

    Science.gov (United States)

    Cotton, Fabrice

    2017-04-01

    Recent damaging earthquakes (e.g. Japan 2011, Nepal 2014, Italy 2016) and associated ground-shaking (ground-motion) records challenge the engineering models used to quantify seismic hazard. The goal of this presentation is to present the lessons learned from these recent events and discuss their implications for ground-motion prediction and probabilistic seismic hazard assessment. The following points will be particularly addressed: 1) Recent observations clearly illustrate the dependency of ground-shaking on earthquake source related factors (e.g. fault properties and geometry, earthquake depth, directivity). The weaknesses of classical models and the impact of these factors on hazard evaluation will be analysed and quantified. 2) These observations also show that events of similar magnitude and style of faulting are producing ground-motions which are highly variable. We will analyse this variability and show that the exponential growth of recorded data give a unique opportunity to quantify regional or between-events shaking variations. Indeed, most seismic-hazard evaluations do not consider the regional specificities of earthquake or wave-propagation properties. There is little guidance in the literature on how this should be done and we will show that this challenge is interdisciplinary, as structural geology, neotectonic and tomographic images can provide key understanding of these regional variations. 3) One of the key lessons of recent earthquakes is that extreme hazard scenarios and ground-shaking are difficult to predict. In other words, we need to mobilize "scientific imagination" and define new strategies based on the latest research results to capture epistemic uncertainties and integrate them in engineering seismology projects. We will discuss these strategies and show an example of their implementation to develop new seismic hazard maps of Europe (Share and Sera FP7 projects) and Germany.

  13. A semi-parametric generalization of the Cox proportional hazards regression model: Inference and Applications.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2011-01-01

    The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function taking a form similar to the Cox PH model, with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard and it also includes, for the two sample problem, the case of two Weibull distributions and two extreme value distributions differing in both scale and shape parameters. The partial likelihood approach can not be applied here to estimate the model parameters. We use the full likelihood approach via a cubic B-spline approximation for the baseline hazard to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's Information Criterion is developed. We illustrate the applicability of our approach using real-life data.
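
    One way to write the kind of generalization described above (the exact parameterization in the paper may differ) is in terms of the cumulative hazard:

        H(t \mid Z) = \left[ H_0(t) \right]^{\exp(\gamma^{\top} Z)} \exp(\beta^{\top} Z)

    which reduces to the Cox proportional hazards model when \gamma = 0 and otherwise lets the covariates interact with the baseline hazard through the power term.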

  14. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lu [Sate Key Laboratory of Fire Science, University of Science and Technology of China, Hefei 230027 (China); Huang, Xianjia, E-mail: huangxianjia@gziit.ac.cn [Joint Laboratory of Fire Safety in Nuclear Power Plants, Institute of Industry Technology Guangzhou & Chinese Academy of Sciences, Guangzhou 511458 (China); Bi, Kun; Liu, Xiaoshuang [China Nuclear Power Design Co., Ltd., Shenzhen 518045 (China)

    2016-05-15

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model, referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays), was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted. The histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.

  15. A multidimensional stability model for predicting shallow landslide size and shape across landscapes

    Science.gov (United States)

    David G. Milledge; Dino Bellugi; Jim A. McKean; Alexander L. Densmore; William E. Dietrich

    2014-01-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but...

  16. A summary of hazard datasets and guidelines supported by the Global Earthquake Model during the first implementation phase

    Directory of Open Access Journals (Sweden)

    Marco Pagani

    2015-04-01

    Full Text Available The Global Earthquake Model (GEM) initiative promotes open, transparent and collaborative science aimed at the assessment of earthquake risk and its reduction worldwide. During the first implementation phase (2009-2014), GEM sponsored five projects aimed at the creation of global datasets and guidelines toward the creation of open, transparent and, as far as possible, homogeneous hazard input models. These projects concentrated on the following global databases and models: an instrumental catalogue, a historical earthquake archive and catalogue, a geodetic strain rate model, a database of active faults, and a set of ground motion prediction equations. This paper describes the main outcomes of these projects, illustrating some initial applications as well as challenges in the creation of hazard models.

  17. Validating a Hazardous Drinking Index in a Sample of Sexual Minority Women: Reliability, Validity and Predictive Accuracy

    Science.gov (United States)

    Riley, Barth B.; Hughes, Tonda L.; Wilsnack, Sharon C.; Johnson, Timothy P.; Benson, Perry; Aranda, Frances

    2017-01-01

    Background Although sexual minority women (SMW) are at increased risk of hazardous drinking (HD), efforts to validate HD measures have yet to focus on this population. Objectives Validation of a 13-item Hazardous Drinking Index (HDI) in a large sample of SMW. Methods Data were from 700 adult SMW (age 18–82) enrolled in the Chicago Health and Life Experiences of Women study. Criterion measures included counts of depressive symptoms and post-traumatic stress disorder (PTSD) symptoms, average daily and 30-day ethanol consumption, risky sexual behavior, and Diagnostic and Statistical Manual (DSM-IV) measures of alcohol abuse/dependence. Analyses included assessment of internal consistency, construction of receiver operating characteristic (ROC) curves to predict alcohol abuse/dependence, and correlations between HDI and criterion measures. We compared the psychometric properties (diagnostic accuracy and correlates of hazardous drinking) of the HDI to the commonly used CAGE instrument. Results KR-20 reliability for the HDI was 0.80, compared to 0.74 for the CAGE. Predictive accuracy, as measured by the area under the receiver operating characteristic curve for alcohol abuse/dependence, was HDI: 0.89; CAGE: 0.84. The HDI evidenced the best predictive efficacy and tradeoff between sensitivity and specificity. Results supported the concurrent validity of the HDI measure. Conclusions The Hazardous Drinking Index is a reliable and valid measure of hazardous drinking for sexual minority women. PMID:27661289

  18. Modelling the costs of natural hazards in games

    Science.gov (United States)

    Bostenaru-Dan, M.

    2012-04-01

    City are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city in the next 20 years. The connection to another games genre as video games, the board games, will be investigated, since there are games on construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the middle ages based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End" and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first hand playing experience. In games like "World without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done through providing "building" an item out of stylised materials, such as "stone", "sand" or more specific ones as "nail". Such approach could be used also for retrofitting buildings for earthquakes, in the series of "upgrade", not just for extension as it is currently in games, and this is what our research is about. "World without End" includes a natural disaster not so analysed today but which was judged by the author as the worst of manhood: the Black Death. The Black Death has effects and costs as well, not only modelled through action cards, but also on the built environment, by buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has been recently recognised as a way to bring games theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to Analytic Hierarchy Process. The presentation aims to also give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in teaching of urban

  19. Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments

    Science.gov (United States)

    2011-05-03

    [Record text is fragmentary. The indexed excerpts describe depositional hiatuses with few long and many short durations, analogous to the Gutenberg-Richter power-law distribution of earthquakes; eolian deposits on the Fortymile Wash alluvial fan that are fairly thin and spatially discontinuous; the maximum relief from the oldest unit (Qa3) to the active channels; and parts of a table of surficial map units extending to the historical unit Qa7.]

  20. Development of algal interspecies correlation estimation models for chemical hazard assessment.

    Science.gov (United States)

    Brill, Jessica L; Belanger, Scott E; Chaney, Joel G; Dyer, Scott D; Raimondo, Sandy; Barron, Mace G; Pittinger, Charles A

    2016-09-01

    Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from 1 species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potentially filling in data gaps for a variety of environmental assessment purposes. Web-ICE has historically been dominated by aquatic and terrestrial animal prediction models. Web-ICE models for algal species were essentially absent and are addressed in the present study. A compilation of public and private sector-held algal toxicity data were compiled and reviewed for quality based on relevant aspects of individual studies. Interspecies correlations were constructed from the most commonly tested algal genera for a broad spectrum of chemicals. The ICE regressions were developed based on acute 72-h and 96-h endpoint values involving 1647 unique studies on 476 unique chemicals encompassing 40 genera and 70 species of green, blue-green, and diatom algae. Acceptance criteria for algal ICE models were established prior to evaluation of individual models and included a minimum sample size of 3, a statistically significant regression slope, and a slope estimation parameter ≥0.65. A total of 186 ICE models were possible at the genus level, with 21 meeting quality criteria; and 264 ICE models were developed at the species level, with 32 meeting quality criteria. Algal ICE models will have broad utility in screening environmental hazard assessments, data gap filling in certain regulatory scenarios, and as supplemental information to derive species sensitivity distributions. Environ Toxicol Chem 2016;35:2368-2378. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
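
    An ICE regression is essentially a log-log linear fit between the toxicity of a surrogate taxon and that of a predicted taxon; the sketch below shows that general form with hypothetical EC50 values, not the Web-ICE implementation or its acceptance criteria.

        import numpy as np

        # Hedged sketch of an interspecies correlation estimation (ICE) regression:
        # acute toxicity values of a predicted taxon are regressed on those of a
        # surrogate taxon on log10 scales, and the fitted line is then used to
        # estimate toxicity for untested chemicals. EC50 values are hypothetical.
        surrogate_ec50 = np.array([0.8, 3.2, 12.0, 45.0, 150.0])   # mg/L, surrogate alga
        predicted_ec50 = np.array([1.1, 2.5, 15.0, 60.0, 110.0])   # mg/L, second alga

        slope, intercept = np.polyfit(np.log10(surrogate_ec50), np.log10(predicted_ec50), 1)
        print("slope:", round(slope, 2), "intercept:", round(intercept, 2))

        def ice_predict(surrogate_value_mg_per_l):
            """Predict the second taxon's EC50 from a surrogate EC50 via the fitted line."""
            return 10 ** (intercept + slope * np.log10(surrogate_value_mg_per_l))

        print(round(ice_predict(10.0), 1))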

  1. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  2. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    OpenAIRE

    Blahut, J.; P. Horton; Sterlacchini, S.; Jaboyedoff, M.

    2010-01-01

    Debris flow hazard modelling at medium (regional) scale has been subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of th...

  3. Distribution modeling of hazardous airborne emissions from industrial campuses in Iraq via GIS techniques

    Science.gov (United States)

    Salwan Al-Hasnawi, S.; Salam Bash AlMaliki, J.; Falih Nazal, Zainab

    2017-08-01

    The presence of considerable amounts of hazardous elements in air may have prolonged lethal effects on residential and/or commercial campuses and activities, especially those around the emission sources; hence it is important to monitor and anticipate these concentrations and to design effective spatial forecasting models for that purpose. Geographic information systems (GIS) were utilized to monitor, analyze and model the presence and concentrations of airborne Pb, Cr, and Zn elements in the atmosphere around certain industrial campuses in the northern part of Iraq. Diffusion patterns were determined for these elements via GIS extensions for geostatistical and spatial analysis that implement Kriging and inverse distance weighted (IDW) methods to interpolate a raster surface. The main determining factors, such as wind speed, ambient temperature and topographic distribution, were considered in order to design a prediction model that serves as an early alert for possible future accidents. Results of an eight-month observation program showed that the concentrations of the three elements significantly exceeded the Iraqi and WHO limits at most of the observed locations, especially in summer. Also, the predictive models were validated against the field measurements and showed a close match, especially for the geostatistical analysis map, which had around 4% error for the three tested elements.
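
    As a minimal illustration of one of the interpolation methods named above, the sketch below implements the basic inverse distance weighting formula; it is a stand-in for the GIS geostatistical tools actually used, and the station coordinates and concentrations are hypothetical.

        import numpy as np

        # Hedged sketch of inverse distance weighting (IDW), one of the two
        # interpolation methods mentioned; monitoring points are hypothetical.
        stations_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # km
        pb_ug_per_m3 = np.array([0.8, 1.5, 0.6, 2.1])  # measured airborne Pb at the stations

        def idw(target_xy, points_xy, values, power=2.0):
            """IDW estimate at target_xy: weights are 1 / distance**power."""
            d = np.linalg.norm(points_xy - np.asarray(target_xy), axis=1)
            if np.any(d == 0):                      # target coincides with a station
                return values[np.argmin(d)]
            w = 1.0 / d ** power
            return float(np.sum(w * values) / np.sum(w))

        print(round(idw((0.4, 0.6), stations_xy, pb_ug_per_m3), 2))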

  4. Non-Poissonian earthquake occurrence in coupled stress release models and its effect on seismic hazard

    Science.gov (United States)

    Kuehn, N. M.; Hainzl, S.; Scherbaum, F.

    2008-08-01

    Most seismic hazard estimations are based on the assumption of a Poisson process for earthquake occurrence, even though both observations and models indicate a departure of real seismic sequences from this simplistic assumption. Instrumental earthquake catalogues show earthquake clustering on regional scales while the elastic rebound theory predicts a periodic recurrence of characteristic earthquakes on longer timescales for individual events. Recent implementations of time-dependent hazard calculations in California and Japan are based on quasi-periodic recurrences of fault ruptures according to renewal models such as the Brownian Passage Time model. However, these renewal models neglect earthquake interactions and the dependence on the stressing history which might destroy any regularity of earthquake recurrences in reality. To explore this, we investigate the (coupled) stress release model, a stochastic version of the elastic rebound hypothesis. In particular, we are interested in the time-variability of the occurrence of large earthquakes and its sensitivity to the occurrence of Gutenberg-Richter type earthquake activity and fault interactions. Our results show that in general large earthquakes occur quasi-periodically in the model: the occurrence probability of large earthquakes is strongly decreased shortly after a strong event and becomes constant on longer timescales. Although possible stress-interaction between adjacent fault zones does not affect the recurrence time distributions in each zone significantly, it leads to a temporal clustering of events on larger regional scales. The non-random characteristics, especially the quasi-periodic behaviour of large earthquakes, are even more pronounced if stress changes due to small earthquakes are less important. The recurrence-time distribution for the largest events is characterized by a coefficient of variation from 0.6 to 0.84 depending on the relative importance of small earthquakes.
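
    For reference, the single-region stress release model is usually written with a conditional intensity of the form below; the coupled variant adds stress-transfer terms between regions, and the notation used in the study may differ.

        \lambda(t) = \exp\left[ \mu + \nu \, X(t) \right], \qquad
        X(t) = X(0) + \rho \, t - \sum_{i:\, t_i < t} S_i

    where X(t) is the regional stress level, \rho the tectonic loading rate, and S_i the stress drop of earthquake i; in the coupled model the stress in one region is also modified by the stress released in neighbouring regions.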

  5. Prediction of rockburst probability given seismic energy and factors defined by the expert method of hazard evaluation (MRG)

    Science.gov (United States)

    Kornowski, Jerzy; Kurzeja, Joanna

    2012-04-01

    In this paper we suggest that the conditional estimator/predictor of rockburst probability (and rockburst hazard, $P^T(t)$) can be approximated with the formula $P^T(t) = P_1(\theta_1)\cdots P_N(\theta_N)\cdot P_{\mathrm{dyn}}^T(t)$, where $P_{\mathrm{dyn}}^T(t)$ is a time-dependent probability of rockburst given only the predicted seismic energy parameters, while the $P_i(\theta_i)$ are amplifying coefficients due to local geologic and mining conditions, as defined by the Expert Method of (rockburst) Hazard Evaluation (MRG) known in the Polish mining industry. All the elements of the formula are (approximately) calculable (on-line) and the resulting value satisfies the inequalities $0 \le P^T(t) \le 1$. As a result, the hazard space (0-1) can always be divided into smaller subspaces (e.g., $0$-$10^{-5}$, $10^{-5}$-$10^{-4}$, $10^{-4}$-$10^{-3}$, $10^{-3}$-$1$), possibly named with symbols (e.g., A, B, C, D, ...) called "hazard states" — which saves the prediction users from worrying about probabilities. The estimator $P^T$ can be interpreted as a formal statement of the (reformulated) Comprehensive Method of Rockburst State of Hazard Evaluation, well known in the Polish mining industry. The estimator $P^T$ is natural, logically consistent and physically interpretable. Due to its full formalization, it can be easily generalized, incorporating relevant information from other sources/methods.
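
    A small numerical illustration of this combination and of the mapping onto lettered hazard states is given below; the coefficient values are hypothetical and are not taken from the MRG tables.

        import math

        # Hedged numerical illustration of the combination described above
        # (hypothetical values, not the MRG method itself).
        p_dyn = 2.0e-4             # probability given predicted seismic energy alone
        factors = [1.8, 1.2, 0.9]  # amplifying coefficients for local geologic/mining conditions

        p_total = p_dyn * math.prod(factors)     # P^T(t)
        bins = [(1e-5, "A"), (1e-4, "B"), (1e-3, "C"), (1.0, "D")]
        state = next(label for upper, label in bins if p_total <= upper)
        print(p_total, state)   # ~3.9e-4 -> state "C"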

  6. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics based conceptual models to highly coupled thermo fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  7. An example of debris-flows hazard modeling using GIS

    Directory of Open Access Journals (Sweden)

    L. Melelli

    2004-01-01

    We present a GIS-based model for predicting debris-flow occurrence. The availability of two different digital datasets and the use of a Digital Elevation Model (at a given scale) have greatly enhanced our ability to quantify and to analyse the topography in relation to debris flows. In particular, analysing the relationship between debris flows and the various causative factors provides new understanding of the mechanisms. We studied the contact zone between the calcareous basement and the fluvial-lacustrine infill in the adjacent northern area of the Terni basin (Umbria, Italy), and identified eleven basins and corresponding alluvial fans. We suggest that accumulations of colluvium in topographic hollows, whatever the sources might be, should be considered potential debris-flow source areas. In order to develop a susceptibility map for the entire area, an index was calculated as the number of initiation locations in each causative-factor unit divided by the areal extent of that unit within the study area. This index identifies those units that produce the most debris flows in each Representative Elementary Area (REA). Finally, the results are presented with the advantages and the disadvantages of the approach, and the need for further research.
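
    The susceptibility index described here (initiation count per causative-factor unit, normalised by that unit's area) is straightforward to compute. Below is a minimal Python sketch with hypothetical unit names, counts and areas; it is only meant to make the index definition concrete, not to reproduce the study's data.

    ```python
    import pandas as pd

    # Hypothetical inventory: each row is one debris-flow initiation point,
    # labelled with the causative-factor unit (e.g. a lithology class) it falls in.
    initiations = pd.DataFrame({"unit": ["colluvium", "colluvium", "limestone", "colluvium"]})

    # Hypothetical areal extent of each unit within the study area (km2).
    unit_area_km2 = pd.Series({"colluvium": 12.5, "limestone": 48.0, "alluvium": 30.0})

    # Susceptibility index: initiation count per unit divided by the unit's area.
    counts = initiations["unit"].value_counts().reindex(unit_area_km2.index, fill_value=0)
    susceptibility_index = counts / unit_area_km2
    print(susceptibility_index.sort_values(ascending=False))
    ```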

  8. Toward a coupled Hazard-Vulnerability Tool for Flash Flood Impacts Prediction

    Science.gov (United States)

    Terti, Galateia; Ruin, Isabelle; Anquetin, Sandrine; Gourley, Jonathan J.

    2015-04-01

    Flash floods (FF) are high-impact, catastrophic events that result from the intersection of hydrometeorological extremes and society at small space-time scales, generally on the order of minutes to hours. Because FF events are generally localized in space and time, they are very difficult to forecast with precision and can subsequently leave people uninformed and subject to surprise in the midst of their daily activities (e.g., commuting to work). In Europe, FFs are the main source of natural hazard fatalities, although they affect smaller areas than riverine flooding. In the US as well, flash flooding is the leading cause of weather-related deaths in most years, with some 200 annual fatalities. There were 954 fatalities and approximately 31 billion U.S. dollars of property damage due to floods and flash floods from 1995 to 2012 in the US. For forecasters and emergency managers, the prediction of, and subsequent response to, impacts from such sudden-onset, localized events remains a challenge. This research is motivated by the hypothesis that the intersection of the spatio-temporal context of the hazard with the distribution of people and their characteristics across space and time reveals different paths of vulnerability. We argue that vulnerability and the dominant impact type vary dynamically throughout the day and week according to the location of concern. Thus, it is appropriate to develop indices that capture, for example, vehicle-related impacts on the active population concentrated on the road network during morning or evening rush hours. This study describes the methodological developments of our approach and applies our hypothesis to the case of the June 14th, 2010 flash flood event in the Oklahoma City area (Oklahoma, US). Social (i.e. population socio-economic profile), exposure (i.e. population distribution, land use), and physical (i.e. built and natural environment) data are used to compose different vulnerability products based on the forecast location

  9. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced definitive guidance on the models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  10. Testing predictive performance of binary choice models

    NARCIS (Netherlands)

    A.C.D. Donkers (Bas); B. Melenberg (Bertrand)

    2002-01-01

    Binary choice models occur frequently in economic modeling. A measure of the predictive performance of binary choice models that is often reported is the hit rate of a model. This paper develops a test for the outperformance of a predictor for binary outcomes over a naive prediction
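
    As a concrete illustration of the hit-rate measure mentioned above (not the authors' actual test statistic), the following Python sketch compares a model's hit rate against a naive majority-class prediction on hypothetical binary outcomes.

    ```python
    import numpy as np

    def hit_rate(y_true, y_pred):
        """Fraction of binary outcomes predicted correctly."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return np.mean(y_true == y_pred)

    # Hypothetical outcomes and model predictions.
    y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    model_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

    # Naive benchmark: always predict the majority class observed in the sample.
    naive_pred = np.full_like(y, int(y.mean() >= 0.5))

    print("model hit rate:", hit_rate(y, model_pred))   # 0.8
    print("naive hit rate:", hit_rate(y, naive_pred))   # 0.6
    ```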

  11. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  12. Bayes estimation of the mixture of hazard-rate model

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, K.K.; Krishna, Hare; Singh, Bhupendra

    1997-01-01

    Engineering systems are subject to continuous stresses and shocks which may (or may not) cause a change in the failure pattern of the system with unknown probability q (= 1 - p), 0 < p < 1. Conceptualising a mixture of hazard-rate (failure-rate) patterns as representing this realistic situation, the corresponding failure-time distribution is derived. Classical and Bayesian estimation of the parameters and reliability characteristics of this failure-time distribution is the subject matter of the present study.
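
    The mixture idea can be made concrete with a small simulation. The sketch below assumes, purely for illustration, that both failure patterns are exponential with different rates (the paper's actual component distributions are not specified here) and checks the mixture survival function against simulated failure times.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Mixture failure-time model: with probability p the item keeps its original
    # (exponential) failure pattern; with probability q = 1 - p a shock changes
    # the failure rate.  The rates and p below are illustrative assumptions.
    p, lam_original, lam_changed = 0.7, 0.5, 2.0

    def sample_failure_times(n):
        changed = rng.random(n) >= p
        rates = np.where(changed, lam_changed, lam_original)
        return rng.exponential(1.0 / rates)

    def reliability(t):
        """Mixture survival function R(t) = p*exp(-lam1*t) + q*exp(-lam2*t)."""
        return p * np.exp(-lam_original * t) + (1 - p) * np.exp(-lam_changed * t)

    times = sample_failure_times(100_000)
    t0 = 1.0
    print("empirical R(1):", np.mean(times > t0), " analytical R(1):", reliability(t0))
    ```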

  13. Occupational hazard evaluation model underground coal mine based on unascertained measurement theory

    Science.gov (United States)

    Deng, Quanlong; Jiang, Zhongan; Sun, Yaru; Peng, Ya

    2017-05-01

    In order to study how to comprehensively evaluate the influence of several occupational hazards on miners’ physical and mental health, an occupational hazard evaluation indicator system was established on the basis of unascertained measurement theory to support quantitative and qualitative analysis. Indicator weights were determined by information entropy, and the occupational hazard level was estimated using credible-degree recognition criteria; the evaluation model was programmed in Visual Basic and applied to the comprehensive evaluation of occupational hazards at six posts in an underground coal mine, and the occupational hazard degree was graded. The evaluation results are consistent with the actual situation. The results show that dust and noise are the most significant occupational hazard factors in the coal mine. Excavation face support workers are the most affected, followed by heading machine drivers, coal cutter drivers, and coalface support-moving workers; the occupational hazard degree of these four types of workers is level II (mild). The occupational hazard degree of ventilation workers and safety inspection workers is level I. The evaluation model can assess underground coal mines objectively and accurately, and can be employed in actual engineering practice.
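
    The information-entropy weighting step mentioned above is simple to sketch. The Python fragment below uses a hypothetical post-by-indicator matrix (the paper's actual indicators and data are not reproduced) to show how indicators with more variation across posts receive larger weights; the credible-degree recognition step is omitted.

    ```python
    import numpy as np

    # Hypothetical indicator matrix: rows = posts (jobs), columns = hazard indicators
    # (e.g. dust, noise, heat), already normalised to comparable positive scales.
    x = np.array([[0.9, 0.7, 0.3],
                  [0.6, 0.8, 0.2],
                  [0.4, 0.3, 0.6],
                  [0.2, 0.5, 0.9]])

    # Entropy weighting: indicators whose values vary more across posts carry
    # more information and therefore receive larger weights.
    p = x / x.sum(axis=0)                                # column-wise proportions
    m = x.shape[0]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(m)   # entropy of each indicator
    weights = (1 - entropy) / (1 - entropy).sum()
    print("indicator weights:", np.round(weights, 3))
    ```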

  14. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    Science.gov (United States)

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
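
    The Manning relation at the core of such raster hydraulic models is easy to illustrate for a single cell. The sketch below assumes a wide, shallow flow so the hydraulic radius is approximated by the flow depth; the roughness coefficient, slope, depth and cell size are illustrative values, not those of the study.

    ```python
    def manning_velocity(depth_m, slope, n=0.035):
        """Flow velocity (m/s) from Manning's equation for a wide, shallow cell,
        where the hydraulic radius is approximated by the flow depth."""
        return (1.0 / n) * depth_m ** (2.0 / 3.0) * slope ** 0.5

    def cell_discharge(depth_m, slope, cell_width_m, n=0.035):
        """Discharge through one raster cell (m^3/s)."""
        return manning_velocity(depth_m, slope, n) * depth_m * cell_width_m

    # Illustrative values only: 0.4 m deep flow on a 2% slope through a 10 m cell.
    print(f"{cell_discharge(0.4, 0.02, 10.0):.2f} m^3/s")
    ```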

  15. A study of the slope of Cox proportional hazard and Weibull models

    African Journals Online (AJOL)

    Adejumo & Ahmadu

    Parametric models require that the distribution of survival time is known and that the hazard function is completely specified except for the values of the unknown parameters. These include the Weibull model, the exponential model, and the log-normal model. In this research work, the Weibull model is used for ...

  16. Combining modeling and gaming for predictive analytics

    National Research Council Canada - National Science Library

    Riensche, Roderick M; Whitney, Paul D

    2012-01-01

    .... In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling...

  17. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    Directory of Open Access Journals (Sweden)

    J. Blahut

    2010-11-01

    Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information, and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model, with a 10 m resolution, was used together with land-use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise

  18. Moving the Hazard Prediction and Assessment Capability to a Distributed, Portable Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, RW

    2002-09-05

    The Hazard Prediction and Assessment Capability (HPAC) has been re-engineered from a Windows application with tight binding between computation and a graphical user interface (GUI) to a new distributed object architecture. The key goals of this new architecture are platform portability, extensibility, deployment flexibility, client-server operations, easy integration with other systems, and support for a new map-based GUI. Selection of Java as the development and runtime environment is the major factor in achieving each of the goals, platform portability in particular. Portability is further enforced by allowing only Java components in the client. Extensibility is achieved via Java's dynamic binding and class loading capabilities and a design-by-interface approach. HPAC supports deployment on a standalone host, as a heavy client in client-server mode with data stored on the client but calculations performed on the server host, and as a thin client with data and calculations on the server host. The principal architectural element supporting deployment flexibility is the use of Universal Resource Locators (URLs) for all file references. Java WebStart™ is used for thin client deployment. Although there were many choices for the object distribution mechanism, the Common Object Request Broker Architecture (CORBA) was chosen to support HPAC client-server operation. HPAC complies with version 2.0 of the CORBA standard and does not assume support for pass-by-value method arguments. Execution in standalone mode is expedited by having most server objects run in the same process as client objects, thereby bypassing CORBA object transport. HPAC provides four levels for access by other tools and systems, starting with a Windows library providing transport and dispersion (T&D) calculations and output generation, detailed and more abstract sets of CORBA services, and reusable Java components.

  19. Evaluating Approaches to a Coupled Model for Arctic Coastal Erosion, Infrastructure Risk, and Associated Coastal Hazards

    Science.gov (United States)

    Frederick, J. M.; Bull, D. L.; Jones, C.; Roberts, J.; Thomas, M. A.

    2016-12-01

    Arctic coastlines are receding at accelerated rates, putting existing and future activities in the developing coastal Arctic environment at extreme risk. For example, at Oliktok Long Range Radar Site, erosion that was not expected until 2040 was reached as of 2014 (Alaska Public Media). As the Arctic Ocean becomes increasingly ice-free, rates of coastal erosion will likely continue to increase as (a) increased ice-free waters generate larger waves, (b) sea levels rise, and (c) coastal permafrost soils warm and lose strength/cohesion. Due to the complex and rapidly varying nature of the Arctic region, little is known about the increasing waves, changing circulation, permafrost soil degradation, and the response of the coastline to changes in these combined conditions. However, as scientific focus has been shifting towards the polar regions, Arctic science is rapidly advancing, increasing our understanding of complex Arctic processes. Our present understanding allows us to begin to develop and evaluate the coupled models necessary for the prediction of coastal erosion in support of Arctic risk assessments. What are the best steps towards the development of a coupled model for Arctic coastal erosion? This work focuses on our current understanding of Arctic conditions and identifying the tools and methods required to develop an integrated framework capable of accurately predicting Arctic coastline erosion and assessing coastal risk and hazards. We will present a summary of the state-of-the-science, and identify existing tools and methods required to develop an integrated diagnostic and monitoring framework capable of accurately predicting and assessing Arctic coastline erosion, infrastructure risk, and coastal hazards. The summary will describe the key coastal processes to simulate, appropriate models to use, effective methods to couple existing models, and identify gaps in knowledge that require further attention to make progress in our understanding of Arctic coastal

  20. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  1. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Directory of Open Access Journals (Sweden)

    Mark Stirling

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration, and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g., short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  2. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Science.gov (United States)

    Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  3. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
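
    To make the phase-space reconstruction and "dynamical neighbours" idea concrete, here is a minimal Python sketch of time-delay embedding followed by a nearest-neighbour local prediction on a synthetic signal. The embedding dimension, delay and neighbour count are assumed values, and the signal is synthetic rather than North Sea surge data.

    ```python
    import numpy as np

    def embed(series, dim=3, delay=1):
        """Time-delay embedding: each row is one reconstructed phase-space point."""
        n = len(series) - (dim - 1) * delay
        return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

    def local_predict(series, dim=3, delay=1, k=5):
        """One-step prediction: average the next values of the k nearest
        dynamical neighbours of the current phase-space point."""
        points = embed(series, dim, delay)
        current, history = points[-1], points[:-1]
        next_vals = series[(dim - 1) * delay + 1:]     # value following each history point
        dists = np.linalg.norm(history - current, axis=1)
        nearest = np.argsort(dists)[:k]
        return next_vals[nearest].mean()

    # Illustrative surge-like signal (synthetic, not observed data).
    t = np.arange(2000)
    surge = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t)
    print("predicted next value:", local_predict(surge))
    print("actual next value   :", np.sin(0.07 * 2000) + 0.3 * np.sin(0.23 * 2000))
    ```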

  4. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  5. A model checking method for the proportional hazards model with recurrent gap time data.

    Science.gov (United States)

    Huang, Chiung-Yu; Luo, Xianghua; Follmann, Dean A

    2011-07-01

    Recurrent events are the natural outcome in many medical and epidemiology studies. To assess covariate effects on the gaps between consecutive recurrent events, the Cox proportional hazards model is frequently employed in data analysis. The validity of statistical inference, however, depends on the appropriateness of the Cox model. In this paper, we propose a class of graphical techniques and formal tests for checking the Cox model with recurrent gap time data. The building block of our model checking method is an averaged martingale-like process, based on which a class of multiparameter stochastic processes is proposed. This maneuver is very general and can be used to assess different aspects of model fit. Numerical simulations are conducted to examine finite-sample performance, and the proposed model checking techniques are illustrated with data from the Danish Psychiatric Central Register.

  6. Climate Prediction Center(CPC)Global Tropics Hazards and Benefits Assessment

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Tropics Hazards and Benefits Assessment (GTH) is an outlook product for the areas in the Tropics. Forecasts for the Week-1 and Week-2 period are given for...

  7. Logistic Regression for Seismically Induced Landslide Predictions: Using Uniform Hazard and Geophysical Layers as Predictor Variables

    Science.gov (United States)

    Nowicki, M. A.; Hearne, M.; Thompson, E.; Wald, D. J.

    2012-12-01

    Seismically induced landslides present a costly and often fatal threat in many mountainous regions. Substantial effort has been invested to understand where seismically induced landslides may occur in the future. Both slope-stability methods and, more recently, statistical approaches to the problem are described throughout the literature. Though some regional efforts have succeeded, no uniformly agreed-upon method is available for predicting the likelihood and spatial extent of seismically induced landslides. For use in the U. S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, we would like to routinely make such estimates, in near-real time, around the globe. Here we use the recently produced USGS ShakeMap Atlas of historic earthquakes to develop an empirical landslide probability model. We focus on recent events, yet include any digitally-mapped landslide inventories for which well-constrained ShakeMaps are also available. We combine these uniform estimates of the input shaking (e.g., peak acceleration and velocity) with broadly available susceptibility proxies, such as topographic slope and surface geology. The resulting database is used to build a predictive model of the probability of landslide occurrence with logistic regression. The landslide database includes observations from the Northridge, California (1994); Wenchuan, China (2008); ChiChi, Taiwan (1999); and Chuetsu, Japan (2004) earthquakes; we also provide ShakeMaps for moderate-sized events without landslides for proper model testing and training. The performance of the regression model is assessed with both statistical goodness-of-fit metrics and a qualitative review of whether or not the model is able to capture the spatial extent of landslides for each event. Part of our goal is to determine which variables can be employed based on globally-available data or proxies, and whether or not modeling results from one region are transferable to
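
    A stripped-down version of such a logistic-regression landslide model can be sketched in a few lines. The example below uses synthetic grid cells with hypothetical predictors (peak ground acceleration, slope, a lithology flag); the coefficients and data are invented for illustration and are not the USGS model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5000

    # Hypothetical grid cells: peak ground acceleration (g), topographic slope (deg),
    # and a simple lithology flag; the landslide labels below are synthetic.
    pga = rng.uniform(0.05, 1.2, n)
    slope = rng.uniform(0.0, 45.0, n)
    weak_rock = rng.integers(0, 2, n)
    logit = -6.0 + 4.0 * pga + 0.08 * slope + 1.0 * weak_rock   # assumed "true" relation
    landslide = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([pga, slope, weak_rock])
    model = LogisticRegression(max_iter=1000).fit(X, landslide)

    # Predicted landslide probability for one cell: PGA 0.6 g, 30 deg slope, weak rock.
    print(model.predict_proba([[0.6, 30.0, 1]])[0, 1])
    ```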

  8. Challenges in understanding, modelling, and mitigating Lake Outburst Flood Hazard: experiences from Central Asia

    Science.gov (United States)

    Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.

    2010-05-01

    Lake Outburst Floods can evolve from complex process chains like avalanches of rock or ice that produce flood waves in a lake which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams which can cause sudden outflow of accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event and the rapid construction of a spillway - though problematic - has solved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves overtopping and eventually weakening the dams. The analysis and the mitigation of glacial lake outburst flood (GLOF) hazard remain a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also

  9. Numerical modelling for real-time forecasting of marine oil pollution and hazard assessment

    Science.gov (United States)

    De Dominicis, Michela; Pinardi, Nadia; Bruciaferri, Diego; Liubartseva, Svitlana

    2015-04-01

    (MEDESS4MS) system, which is an integrated operational multi-model oil spill prediction service that can be used by different users to run simulations of oil spills at sea, even in real time, through a web portal. The MEDESS4MS system gathers different oil spill modelling systems and data from meteorological and ocean forecasting systems, as well as operational information on response equipment, together with environmental and socio-economic sensitivity maps. MEDSLIK-II has also been used to provide an assessment of hazard stemming from operational oil ship discharges in the Southern Adriatic and Northern Ionian (SANI) Seas. Operational pollution resulting from ships consists of a movable hazard with a magnitude that changes dynamically as a result of a number of external parameters varying in space and time (temperature, wind, sea currents). Simulations of oil releases have been performed with realistic oceanographic currents and the results show that the oil pollution hazard distribution has an inherent spatial and temporal variability related to the specific flow field variability.

  10. A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2016-01-01

    In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…

  11. Stochastic predictive control with adaptive model maintenance

    OpenAIRE

    Bavdekar, VA; Ehlinger, V; Gidon, D; Mesbah, A.

    2016-01-01

    © 2016 IEEE. The closed-loop performance of model-based controllers often degrades over time due to increased model uncertainty. Some form of model maintenance must be performed to regularly adapt the system model using closed-loop data. This paper addresses the problem of control-oriented model adaptation in the context of predictive control of stochastic linear systems. A stochastic predictive control approach is presented that integrates stochastic optimal control with control-oriented inp...

  12. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    Directory of Open Access Journals (Sweden)

    D. Paprotny

    2017-07-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km2. The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with the Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2–4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28–38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.

  13. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.

    2017-07-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km2. The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28-38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.

  14. Snakes as hazards: modelling risk by chasing chimpanzees.

    Science.gov (United States)

    McGrew, William C

    2015-04-01

    Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.

  15. Empirical Tests of the Predicted Footprint for Uncontrolled Satellite Reentry Hazards

    Science.gov (United States)

    Matney, Mark

    2011-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
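
    Under the simple Kepler-orbit assumption described above, the expected latitude footprint of an uncontrolled reentry from a circular orbit can be sketched with a few lines of Monte Carlo: the argument of latitude is uniform in time and sin(lat) = sin(i)·sin(u). The Python snippet below is only an illustration of that baseline distribution, not the full risk assessment tool.

    ```python
    import numpy as np

    def latitude_footprint(inclination_deg, n_samples=1_000_000, bins=36):
        """Predicted latitude distribution of reentry points for a circular
        Kepler orbit: sample the argument of latitude uniformly in time and
        convert to geocentric latitude via sin(lat) = sin(i) * sin(u)."""
        i = np.radians(inclination_deg)
        u = np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, n_samples)
        lat = np.degrees(np.arcsin(np.sin(i) * np.sin(u)))
        hist, edges = np.histogram(lat, bins=bins, range=(-90, 90), density=True)
        return hist, edges

    # Example: a 51.6 deg inclination orbit concentrates probability near +/- 51.6 deg.
    hist, edges = latitude_footprint(51.6)
    peak_bin = hist.argmax()
    print("most likely latitude band:", edges[peak_bin], "to", edges[peak_bin + 1], "deg")
    ```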

  16. Packet loss rate prediction using the sparse basis prediction model.

    Science.gov (United States)

    Atiya, Amir E; Yoo, Sung Goo; Chong, Kil To; Kim, Hyongsuk

    2007-05-01

    The quality of multimedia communicated through the Internet is highly sensitive to packet loss. In this letter, we develop a time-series prediction model for the end-to-end packet loss rate (PLR). The estimate of the PLR is needed in several transmission control mechanisms such as the TCP-friendly congestion control mechanism for UDP traffic. In addition, it is needed to estimate the amount of redundancy for the forward error correction (FEC) mechanism. An accurate prediction would therefore be very valuable. We used a relatively novel prediction model called the sparse basis prediction model. It is an adaptive nonlinear prediction approach, whereby a very large dictionary of possible inputs is extracted from the time series (for example, through moving averages, some nonlinear transformations, etc.). Only a few of the best inputs from the dictionary are selected and combined linearly. An algorithm adaptively updates the input selection (as well as the weights) each time a new time sample arrives, in a computationally efficient way. Simulation experiments indicate significantly better prediction performance for the sparse basis approach, as compared to other traditional nonlinear approaches.

  17. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age

  18. Application of a Cloud Model-Set Pair Analysis in Hazard Assessment for Biomass Gasification Stations.

    Science.gov (United States)

    Yan, Fang; Xu, Kaili

    2017-01-01

    Because a biomass gasification station includes various hazard factors, hazard assessment is both necessary and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on the cloud model-set pair analysis (CM-SPA). In this method, cloud weight is proposed as the index weight. In contrast to the index weights of other methods, cloud weight is expressed by cloud descriptors; hence, the randomness and fuzziness of cloud weight make it effective in reflecting the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD); the calculation algorithm of the CCD is also worked out. By utilizing the CCD, the hazard assessment results are expressed by normal clouds, and the normal clouds are reflected by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP-based SPA, respectively. The comparison of assessment results illustrates that CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and that CM-SPA makes the assessment results more reasonable and scientific.

  19. Flood Hazard Mapping using Hydraulic Model and GIS: A Case Study in Mandalay City, Myanmar

    Directory of Open Access Journals (Sweden)

    Kyu Kyu Sein

    2016-01-01

    This paper presents the use of flood frequency analysis integrated with a 1D hydraulic model (HEC-RAS) and a Geographic Information System (GIS) to prepare flood hazard maps for different return periods on the Ayeyarwady River at Mandalay City in Myanmar. Gumbel’s distribution was used to calculate the flood peaks of different return periods, namely 10 years, 20 years, 50 years, and 100 years. The flood peaks from the frequency analysis were input into the HEC-RAS model to find the corresponding flood levels and extents in the study area. The model results were then integrated with ArcGIS to generate flood plain maps. Flood depths and extents have been identified through the flood plain maps. Analysis of the 100-year return period flood plain map indicated that 157.88 km2 (17.54%) is likely to be inundated. The predicted flood depth varies from greater than 0 to 24 m in the flood plains and on the river. Depths between 3 and 5 m were identified in the urban areas of Chanayetharzan, Patheingyi, and Amarapura Townships. The largest inundated area, 85 km2, was in Amarapura Township.
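
    The Gumbel flood-frequency step is easy to illustrate. The Python sketch below fits a Gumbel distribution to a hypothetical series of annual maximum discharges by the method of moments and returns the 10-, 20-, 50- and 100-year flood peaks; the data and the fitting choice are assumptions for illustration, not the study's gauge records.

    ```python
    import numpy as np

    def gumbel_flood_peak(annual_max_q, return_period_years):
        """Return-period flood peak from a Gumbel distribution fitted to annual
        maxima by the method of moments."""
        q = np.asarray(annual_max_q, dtype=float)
        beta = np.sqrt(6.0) * q.std(ddof=1) / np.pi      # scale parameter
        mu = q.mean() - 0.5772 * beta                    # location parameter
        T = np.asarray(return_period_years, dtype=float)
        return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

    # Hypothetical annual maximum discharges (m^3/s), not actual gauge data.
    annual_max = [18500, 21000, 17200, 25400, 19800, 23100, 20500, 26800, 22300, 24100]
    for T, qT in zip((10, 20, 50, 100), gumbel_flood_peak(annual_max, (10, 20, 50, 100))):
        print(f"{T:>3}-year flood peak: {qT:,.0f} m^3/s")
    ```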

  20. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  1. Flexible parametric modelling of cause-specific hazards to estimate cumulative incidence functions

    Science.gov (United States)

    2013-01-01

    Background Competing risks are a common occurrence in survival analysis. They arise when a patient is at risk of more than one mutually exclusive event, such as death from different causes, and the occurrence of one of these may prevent any other event from ever happening. Methods There are two main approaches to modelling competing risks: the first is to model the cause-specific hazards and transform these to the cumulative incidence function; the second is to model directly on a transformation of the cumulative incidence function. We focus on the first approach in this paper. This paper advocates the use of the flexible parametric survival model in this competing risk framework. Results An illustrative example on the survival of breast cancer patients has shown that the flexible parametric proportional hazards model has almost perfect agreement with the Cox proportional hazards model. However, the large epidemiological data set used here shows clear evidence of non-proportional hazards. The flexible parametric model is able to adequately account for these through the incorporation of time-dependent effects. Conclusion A key advantage of using this approach is that smooth estimates of both the cause-specific hazard rates and the cumulative incidence functions can be obtained. It is also relatively easy to incorporate time-dependent effects which are commonly seen in epidemiological studies. PMID:23384310
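
    To make the first approach concrete (model the cause-specific hazards, then transform to the cumulative incidence function), the sketch below evaluates CIF_k(t) = ∫ h_k(u) S(u) du numerically for two assumed constant cause-specific hazards and checks it against the closed form; a flexible parametric model would simply supply smooth, covariate-dependent h_k(t) in place of the constants.

    ```python
    import numpy as np

    # Two competing causes with assumed constant cause-specific hazards (per year);
    # these values are illustrative only.
    h1, h2 = 0.03, 0.01
    t = np.linspace(0.0, 10.0, 1001)
    dt = t[1] - t[0]

    overall_surv = np.exp(-(h1 + h2) * t)      # S(t) = exp(-integral of h1 + h2)

    def cumulative_incidence(hazard):
        """CIF_k(t) = integral_0^t h_k(u) * S(u) du, via a simple Riemann sum."""
        return np.cumsum(hazard * overall_surv) * dt

    cif1 = cumulative_incidence(h1)
    closed_form = h1 / (h1 + h2) * (1.0 - np.exp(-(h1 + h2) * t))
    print("CIF for cause 1 at t=10:", cif1[-1].round(4),
          " closed form:", closed_form[-1].round(4))
    ```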

  2. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
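
    The general hybrid idea (a data-driven model correcting the residuals of a first-principles prediction) can be sketched briefly. In the Python fragment below both the "physics" prediction and the observations are synthetic stand-ins, and the autoregressive order is an assumption; it only illustrates the residual-correction mechanism, not the paper's models.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500

    # Placeholder "first-principles" prediction of core temperature and synthetic
    # observations that deviate from it in a structured way (stand-ins for real data).
    time = np.arange(n)
    first_principles = 37.0 + 0.4 * (1 - np.exp(-time / 120.0))
    observed = first_principles + 0.15 * np.sin(time / 30.0) + rng.normal(0, 0.02, n)

    # Hybrid step: fit an autoregressive model to the residuals and use it to
    # correct the physics-based prediction one step ahead.
    residual = observed - first_principles
    p = 3                                               # assumed AR order
    X = np.column_stack([residual[p - k - 1: n - k - 1] for k in range(p)])
    y = residual[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    next_residual = coeffs @ residual[-1: -p - 1: -1]   # AR one-step forecast
    fp_next = 37.0 + 0.4 * (1 - np.exp(-n / 120.0))     # physics prediction at t = n
    hybrid_forecast = fp_next + next_residual
    print(f"hybrid one-step core-temperature forecast: {hybrid_forecast:.3f} °C")
    ```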

  3. Cox Proportional Hazards Models for Modeling the Time to Onset of Decompression Sickness in Hypobaric Environments

    Science.gov (United States)

    Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny

    2003-01-01

    In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model - which is stratified on the quartiles of the final ambient pressure at altitude - includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which are recently developed in the statistical literature, and conclude where the model needs improvement. We consider the addition of frailties to the stratified Cox model, but found that no significant gain was attained above a model that does not include frailties. Finally, we validate some of the models that we fit.

  4. SmoothHazard: An R Package for Fitting Regression Models to Interval-Censored Observations of Illness-Death Models

    Directory of Open Access Journals (Sweden)

    Célia Touraine

    2017-07-01

    The irreversible illness-death model describes the pathway from an initial state to an absorbing state either directly or through an intermediate state. This model is frequently used in medical applications where the intermediate state represents illness and the absorbing state represents death. In many studies, disease onset times are not known exactly. This happens for example if the disease status of a patient can only be assessed at follow-up visits. In this situation the disease onset times are interval-censored. This article presents the SmoothHazard package for R. It implements algorithms for simultaneously fitting regression models to the three transition intensities of an illness-death model where the transition times to the intermediate state may be interval-censored and all the event times can be right-censored. The package parses the individual data structure of the subjects in a data set to find the individual contributions to the likelihood. The three baseline transition intensity functions are modelled by Weibull distributions or alternatively by M-splines in a semi-parametric approach. For a given set of covariates, the estimated transition intensities can be combined into predictions of cumulative event probabilities and life expectancies.

  5. Application of physical erosion modelling to derive off-site muddy flood hazard

    Science.gov (United States)

    Annika Arevalo, Sarah; Schmidt, Jürgen

    2015-04-01

    Muddy floods are local inundation events after heavy rain storms. They occur inside watersheds before the runoff reaches a river. The sediment is eroded from agricultural fields and transported with the surface runoff into adjacent residential areas. The environment where muddy floods occur is very small-scaled. The damage related to muddy floods is caused by the runoff water (flooded houses and cellars) and the transported sediment that is deposited on infrastructure and private properties. There are a variety of factors that drive the occurrence of muddy floods. The spatial extent is rather small and the distribution is very heterogeneous. This makes the prediction of the precise locations that are endangered by muddy flooding a challenge. The aim of this investigation is to identify potential hazard areas that might suffer muddy flooding from modelled soil erosion data. For the German state of Saxony there is a modelled map of soil erosion and particle transport available. The model applied is EROSION 3D. The spatial resolution is a 20 m raster and the conditions assumed are a 10-year rainfall event on uncovered agricultural soils. A digital land-use map is compiled, containing the outer borders of potential risk elements (residential and industrial areas, streets, railroads, etc.) that can be damaged by muddy flooding. The land-use map is merged with the transported sediment map calculated with EROSION 3D. The result precisely depicts the locations where high amounts of sediment might be transported into urban areas under worst-case conditions. This map was validated with observed muddy flood events, which proved to coincide very well with areas predicted to have a potentially high sediment input.

  6. Global Volcano Model: progress towards an international co-ordinated network for volcanic hazard and risk

    Science.gov (United States)

    Loughlin, Susan

    2013-04-01

    GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; development of complementary hazards models and create relevant hazards and risk assessment tools. GVM acts through establishing task forces to deliver explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database on large magnitude explosive eruptions. There is ongoing work to develop databases on debris avalanches, lava dome hazards and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.

  7. Three multimedia models used at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C. [Brookhaven National Lab., Upton, NY (United States); Rambaugh, J.O.; Potter, S. [Geraghty and Miller, Inc., Plainview, NY (United States)

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.

  8. Posterior Predictive Bayesian Phylogenetic Model Selection

    Science.gov (United States)

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  9. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  10. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  11. Comparative Study of Bankruptcy Prediction Models

    OpenAIRE

    Isye Arieshanti; Yudhi Purwananto; Ariestia Ramadhani; Mohamat Ulin Nuha; Nurissaidah Ulinnuha

    2013-01-01

    Early indication of bankruptcy is important for a company. If companies are aware of the potential for bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a model of bankruptcy prediction. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the problem specifically...

  12. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.

  13. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  14. Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America

    Science.gov (United States)

    Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.

    2014-12-01

    Hurricane and tropical storm activity in Central America has, over the past decades, consistently caused thousands of casualties, significant population displacement, and substantial property and infrastructure losses. As a component for estimating future potential losses, we present a new regional probabilistic hurricane hazard model for Central America. Currently, there are very few openly available hurricane hazard models for Central America. The resulting hazard model will be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will help improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology, which involves the development of a wind field model that simulates gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events that assess average hurricane landfall angles and their variability at each location. The hazard model then also estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths that will be used for risk analyses in all the Central American countries. This probabilistic hurricane hazard model is also useful for relating synthetic wind estimates to loss and damage data in order to develop and calibrate existing empirical building vulnerability curves. To assess accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of the results are also presented, with special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the
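
    As an illustration of the kind of parametric wind-field calculation such hazard models rely on, the sketch below evaluates the widely used Holland (1980) gradient-wind profile. It is a generic example, not the World Bank model described above; the storm parameters, the surface-reduction factor and the gust factor are assumed values chosen for illustration.

```python
# Illustrative Holland (1980) gradient-wind profile (not the model from the paper).
import numpy as np

def holland_gradient_wind(r_km, p_centre_hpa, p_env_hpa, r_max_km, b, lat_deg,
                          rho=1.15):
    """Gradient-level wind speed (m/s) at radius r_km from the storm centre."""
    r = np.asarray(r_km, dtype=float) * 1000.0            # radius, m
    r_max = r_max_km * 1000.0                              # radius of maximum winds, m
    dp = (p_env_hpa - p_centre_hpa) * 100.0                # pressure deficit, Pa
    f = 2.0 * 7.292e-5 * np.sin(np.radians(lat_deg))       # Coriolis parameter, 1/s
    a = (r_max / r) ** b
    return np.sqrt(b * dp * a * np.exp(-a) / rho + (r * f / 2.0) ** 2) - r * f / 2.0

# Hypothetical storm: 960 hPa central pressure, R_max = 30 km, at 15 deg N
radii = np.array([10.0, 30.0, 50.0, 100.0, 200.0])         # km
v_gradient = holland_gradient_wind(radii, 960.0, 1010.0, 30.0, 1.3, 15.0)
v_surface_gust = v_gradient * 0.8 * 1.3                    # crude surface and gust factors (assumed)
print(np.round(v_surface_gust, 1))                         # m/s at each radius
```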

  15. A Global Model for Bankruptcy Prediction.

    Directory of Open Access Journals (Sweden)

    David Alaminos

    Full Text Available The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
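
    A minimal sketch of the logistic-regression approach on which such bankruptcy models are built is shown below. It uses scikit-learn on simulated data; the three "financial ratio" features and their coefficients are invented for illustration and are not the variables used in the study.

```python
# Minimal logistic-regression bankruptcy classifier on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical financial ratios: profitability, leverage, liquidity
X = rng.normal(size=(n, 3))
# Hypothetical "true" relationship: risk rises with leverage, falls with
# profitability and liquidity
logit = -2.0 - 1.2 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("coefficients:", np.round(model.coef_, 2))
print("test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```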

  16. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which...... is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction error-methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest...... to use the one-step-ahead prediction-error maximum-likelihood (or maximum a posteriori) estimator. It gives consistent estimates of all parameters and the parameter estimates are almost identical to the estimates obtained for long prediction horizons but with consumption of significantly less...

  17. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  18. Robustness aspects of Model Predictive Control

    OpenAIRE

    Megías Jiménez, David

    2011-01-01

    Model, Model-based or Receding-horizon Predictive Control (MPC or RHPC) is a successful and mature control strategy which has gained the widespread acceptance of both academia and industry. The basis of these control laws, which have been reported to handle quite complex dynamics, is to perform predictions of the system to be controlled by means of a model. A control profile is then computed to minimise some cost function defined in terms of the predictions and the hypothesised controls. It w...

  19. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate...... principles, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...

  20. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms...... 2 provides linear dynamical models of Smart Grid units: Electric Vehicles, buildings with heat pumps, refrigeration systems, solar collectors, heat storage tanks, power plants, and wind farms. The models can be realized as discrete time state space models that fit into a predictive control system...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  1. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  2. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter...

  3. DEM resolution effects on shallow landslide hazard and soil redistribution modelling

    NARCIS (Netherlands)

    Claessens, L.F.G.; Heuvelink, G.B.M.; Schoorl, J.M.; Veldkamp, A.

    2005-01-01

    In this paper we analyse the effects of digital elevation model (DEM) resolution on the results of a model that simulates spatially explicit relative shallow landslide hazard and soil redistribution patterns and quantities. We analyse distributions of slope, specific catchment area and relative

  4. Identification, prediction and mitigation of sinkhole hazards in evaporite karst areas

    OpenAIRE

    F. Gutiérrez; Cooper, Anthony; Johnson, Kenneth

    2008-01-01

    Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility, and commonly a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well-understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, inv...

  5. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  6. Can a video-based hazard perception test used for driver licensing predict crash involvement?

    Science.gov (United States)

    Horswill, Mark S; Hill, Andrew; Wetton, Mark

    2015-09-01

    In 2008, the state of Queensland in Australia introduced a video-based hazard perception test as part of the licensing process for new drivers. A key validity check for such a test is whether scores are associated with crash involvement. We present data demonstrating that drivers who failed the hazard perception test (based on a ROC curve-derived pass mark) were 25% [95% confidence interval (CI) 6%, 48%] more likely to be involved in an active crash (defined as a crash occurring while the driver's vehicle was moving but they were not engaged in parking or reversing) during a one year period following the test (controlling for driving exposure, age, and sex). Failing drivers were also 17% (95% CI 6%, 29%) more likely to have been involved in active crashes prior to the test, in the period since obtaining their provisional license. These data support the proposal that the hazard perception test is a valid measure of crash-related driving performance. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Induction and pruning of classification rules for prediction of microseismic hazards in coal mines

    Energy Technology Data Exchange (ETDEWEB)

    Sikora, M. [Silesian Technical University, Gliwice (Poland)

    2011-06-15

    The paper presents results of applying a rule induction and pruning algorithm to classification of the microseismic hazard state in coal mines. Due to the imbalanced distribution of examples describing the 'hazardous' and 'safe' states, a special algorithm was used for rule induction and pruning. The algorithm selects the optimal values of the parameters influencing rule induction and pruning based on training and tuning sets. A rule quality measure, which determines the form and classification ability of the induced rules, is the basic parameter of the algorithm. The specificity and sensitivity of the classifier were used to evaluate its quality. The tests conducted show that the adopted method of rule induction and classifier quality evaluation gives better classification of microseismic hazards than the methods currently used in mining practice. Results obtained by the rule-based classifier were also compared with those obtained by a decision-tree induction algorithm and by a neuro-fuzzy system.

  8. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
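
    A simulation of the kind described above can be sketched as follows (this is not the author's code). It assumes the Python `lifelines` package: event times for a binary covariate are drawn from Weibull distributions with different shapes, so the hazards are non-proportional by construction, and power is estimated as the proportion of simulated data sets in which the proportional-hazards test rejects at the 5% level.

```python
# Monte Carlo estimate of power to detect a proportional-hazards violation.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)

def simulate(n=400):
    """Binary covariate with group-specific Weibull shapes (non-proportional hazards)."""
    x = rng.integers(0, 2, n)
    t = np.where(x == 0, rng.weibull(1.0, n), 0.7 * rng.weibull(2.5, n))
    c = rng.uniform(0.5, 2.0, n)                 # administrative censoring times
    return pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})

n_sim, rejections = 200, 0
for _ in range(n_sim):
    df = simulate()
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    if np.min(res.p_value) < 0.05:
        rejections += 1

print(f"estimated power: {rejections / n_sim:.2f}")
```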

  9. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  10. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  16. Proposal for a probabilistic local level landslide hazard assessment model: The case of Suluktu, Kyrgyzstan

    Science.gov (United States)

    Vidar Vangelsten, Bjørn; Fornes, Petter; Cepeda, Jose Mauricio; Ekseth, Kristine Helene; Eidsvig, Unni; Ormukov, Cholponbek

    2015-04-01

    Landslides are a significant threat to human life and the built environment in many parts of Central Asia. To improve understanding of the magnitude of the threat and propose appropriate risk mitigation measures, landslide hazard mapping is needed both at regional and local level. Many different approaches for landslide hazard mapping exist depending on the scale and purpose of the analysis and what input data are available. This paper presents a probabilistic local scale landslide hazard mapping methodology for rainfall triggered landslides, adapted to the relatively dry climate found in South-Western Kyrgyzstan. The GIS based approach makes use of data on topography, geology, land use and soil characteristics to assess landslide susceptibility. Together with a selected rainfall scenario, these data are inserted into a triggering model based on an infinite slope formulation considering pore pressure and suction effects for unsaturated soils. A statistical model based on local landslide data has been developed to estimate landslide run-out. The model links the spatial extension of the landslide to land use and geological features. The model is tested and validated for the town of Suluktu in the Ferghana Valley in South-West Kyrgyzstan. Landslide hazard is estimated for the urban area and the surrounding hillsides. The case makes use of a range of data from different sources, both remote sensing data and in-situ data. Public global data sources are mixed with case specific data obtained from field work. The different data and models have various degrees of uncertainty. To account for this, the hazard model has been inserted into a Monte Carlo simulation framework to produce a probabilistic landslide hazard map identifying areas with high landslide exposure. The research leading to these results has received funding from the European Commission's Seventh Framework Programme [FP7/2007-2013], under grant agreement n° 312972 "Framework to integrate Space-based and in
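
    The triggering part of such an analysis can be illustrated with a highly simplified Monte Carlo infinite-slope calculation; suction effects, spatial structure and run-out are omitted here, and all parameter distributions are hypothetical rather than the values used for Suluktu.

```python
# Simplified Monte Carlo infinite-slope stability sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

slope = np.radians(rng.normal(30.0, 2.0, n))      # slope angle
phi = np.radians(rng.normal(32.0, 3.0, n))        # effective friction angle
c = rng.lognormal(np.log(5.0), 0.4, n)            # effective cohesion, kPa
z = 2.0                                           # failure depth, m
gamma, gamma_w = 19.0, 9.81                       # soil and water unit weights, kN/m^3
m = rng.uniform(0.0, 1.0, n)                      # saturated fraction of the soil column
                                                  # (set by the rainfall scenario)

# Factor of safety for an infinite slope with partial saturation
fs = (c + (gamma - m * gamma_w) * z * np.cos(slope) ** 2 * np.tan(phi)) / (
    gamma * z * np.sin(slope) * np.cos(slope)
)

print(f"P(failure) = P(FS < 1) = {np.mean(fs < 1.0):.2f}")
```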

  17. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electron density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  18. Beam propagation model for the hazard evaluation of Gaussian laser beams

    Energy Technology Data Exchange (ETDEWEB)

    Schulmeister, K.; Althaus, S.; Grabner, U.; Vees, G. [ARC Seibersdorf Research (Austria)

    2004-07-01

    A beam propagation model was developed to calculate the most hazardous position, the angular subtense of the apparent source, and the power that passes through a 7 mm aperture at that position for a Gaussian laser beam. The results for the thermal retinal hazard are discussed and it is shown that, for most cases, at the most hazardous position the beam waist can be treated as the apparent source. A number of distinctive regions of beam waist diameter and divergence values could be identified, and the basis of the model results could be explained and approximated well by simple formulas. Since the angular subtense of the beam waist (and therefore of the apparent source) decreases with increasing distance to the beam waist, the most hazardous position may not be at 10 cm from the beam waist but some distance away. Consequently, when the results of the model are compared with the accessible emission level and the emission limit that would be determined at 10 cm from the beam waist, as is currently implied in the international laser safety standard IEC 60825-1, the latter would underestimate the hazard by up to a factor of 3.5. (orig.)

  19. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J

    2008-02-11

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.

  20. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  1. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A Predictive Model of Plasma Lamotrigine Levels.

    Science.gov (United States)

    Kamei, K; Terao, T; Katayama, Y; Hatano, K; Kodama, K; Shirahama, M; Sakai, A; Hirakawa, H; Mizokami, Y; Shiotsuki, I; Ishii, N; Inoue, Y

    2016-09-01

    Introduction: Lamotrigine is one of several mood stabilizers, and its effects in the treatment and prevention of depressive episodes, particularly in bipolar disorder, are generally accepted. Although a therapeutic window for lamotrigine has yet to be established, it seems important to obtain information on individual pharmacokinetic peculiarities. This study was conducted to formulate a predictive model of plasma lamotrigine levels. Methods: Using data from 47 patients whose lamotrigine levels, liver function, and renal function were measured, predictive models of lamotrigine levels were formulated by stepwise multiple regression analyses. The predictive power of the models was compared using another dataset of 25 patients. Results: Two models were created using stepwise multiple regression. The first model was: plasma lamotrigine level (μg/mL)=2.308+0.019×lamotrigine dose (mg/day). The second model was: plasma lamotrigine level (μg/mL)=0.08+0.024×lamotrigine dose (mg/day)+4.088×valproate combination (no=0, yes=1). The predictive power of the second model was better than that of the first model. Discussion: The present study proposes a prompt and relatively accurate equation to predict lamotrigine levels. © Georg Thieme Verlag KG Stuttgart · New York.
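
    The two reported equations are straightforward to apply; the short example below simply evaluates them for an illustrative dose of 200 mg/day (the dose is hypothetical, the coefficients are those quoted above).

```python
# Worked example of the two reported regression equations (levels in micrograms/mL).
def lamotrigine_model_1(dose_mg_per_day):
    return 2.308 + 0.019 * dose_mg_per_day

def lamotrigine_model_2(dose_mg_per_day, on_valproate):
    return 0.08 + 0.024 * dose_mg_per_day + 4.088 * (1 if on_valproate else 0)

dose = 200  # mg/day, illustrative
print(lamotrigine_model_1(dose))           # about 6.1
print(lamotrigine_model_2(dose, False))    # about 4.9
print(lamotrigine_model_2(dose, True))     # about 9.0
```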

  3. Lava flow modelling in long and short-term hazard assessment

    Science.gov (United States)

    Martí, Joan; Becerril, Laura; Bartolini, Stefania

    2017-04-01

    Lava flows constitute the commonest volcanic hazard resulting from a non-explosive eruption, especially in basaltic systems. These flows come in many shapes and sizes and have a wide range of surface morphologies (pahoehoe, aa, blocky, etc.) whose differences are mainly controlled by variations in magma viscosity and supply rate at the time of the eruption. The principal constraint on lava emplacement is topography, so flows tend to invade the lowest-lying areas. Modelling such complex non-Newtonian flows is not an easy task, as many of the parameters required to precisely define flow behaviour are not known. This, in addition to the high computational cost required, is one of the reasons why deterministic models are not preferred when conducting long- and short-term hazard assessment. By contrast, probabilistic models, despite being much less precise, offer a rapid approach to lava flow invasion and fulfil the main needs of lava flow hazard analysis, with a much lower computational demand and, consequently, a much wider applicability. In this contribution we analyse the main problems that exist in lava flow modelling, compare deterministic and probabilistic models, and show the application of probabilistic models in long- and short-term hazard assessment. This contribution is part of the EC ECHO SI2.695524:VeTOOLS and EPOS-IP AMD-676564-42 Grants

  4. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...

  5. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...

  6. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods have the defects of low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that generate large volumes of cargo and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  7. Landslide Hazard Assessment and Mapping in the Guil Catchment (Queyras, Southern French Alps): From Landslide Inventory to Susceptibility Modelling

    Science.gov (United States)

    Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique

    2016-04-01

    Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale in order to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling

  8. Flexible recalibration of binary clinical prediction models.

    Science.gov (United States)

    Dalton, Jarrod E

    2013-01-30

    Calibration in binary prediction models, that is, the agreement between model predictions and observed outcomes, is an important aspect of assessing the models' utility for characterizing risk in future data. A popular technique for assessing model calibration first proposed by D. R. Cox in 1958 involves fitting a logistic model incorporating an intercept and a slope coefficient for the logit of the estimated probability of the outcome; good calibration is evident if these parameters do not appreciably differ from 0 and 1, respectively. However, in practice, the form of miscalibration may sometimes be more complicated. In this article, we expand the Cox calibration model to allow for more general parameterizations and derive a relative measure of miscalibration between two competing models from this more flexible model. We present an example implementation using data from the US Agency for Healthcare Research and Quality. Copyright © 2012 John Wiley & Sons, Ltd.
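
    The Cox (1958) recalibration idea described above can be sketched in a few lines: regress the observed binary outcome on the logit of the predicted probability and inspect the fitted intercept and slope. The example below uses statsmodels on simulated data in which the predictions are deliberately overconfident, so the calibration slope comes out below 1; all numbers are illustrative.

```python
# Logistic recalibration check (Cox, 1958) on simulated predictions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
true_logit = rng.normal(-1.0, 1.2, n)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# A miscalibrated model: overconfident (too steep) predicted probabilities
pred_logit = 1.6 * true_logit + 0.4
p_hat = 1 / (1 + np.exp(-pred_logit))

X = sm.add_constant(np.log(p_hat / (1 - p_hat)))     # intercept + logit(p_hat)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)   # calibration intercept and slope (expect slope < 1 here)
```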

  9. Nottingham knee osteoarthritis risk prediction models.

    Science.gov (United States)

    Zhang, Weiya; McWilliams, Daniel F; Ingham, Sarah L; Doherty, Sally A; Muthuri, Stella; Muir, Kenneth R; Doherty, Michael

    2011-09-01

    (1) To develop risk prediction models for knee osteoarthritis (OA) and (2) to estimate the risk reduction that results from modification of potential risk factors. This was a 12-year retrospective cohort study undertaken in the general population in Nottingham, UK. Baseline risk factors were collected by questionnaire. Incident radiographic knee OA was defined by a Kellgren and Lawrence (KL) score ≥2. Incident symptomatic knee OA was defined by KL ≥2 plus knee pain. Progression of knee OA was defined by a KL increase of ≥1 grade from baseline. A logistic regression model was used for prediction. Calibration and discrimination of the models were tested in the Osteoarthritis Initiative (OAI) population and the Genetics of Osteoarthritis and Lifestyle (GOAL) population. ORs of the models were compared with those obtained from meta-analysis of the existing literature. From a community sample of 424 people aged over 40, 3 risk prediction models were developed: incidence of radiographic knee OA, incidence of symptomatic knee OA, and progression of knee OA. All models had good calibration and moderate discrimination power in OAI and GOAL. The ORs lay within the 95% CIs of the published studies. The risk reduction due to modifying obesity at the individual and population levels was demonstrated. Risk prediction of knee OA based on well-established, common modifiable risk factors has been achieved. The models may be used to predict the risk of knee OA, and the risk reduction due to preventing a specific risk factor.

  10. A Dynamic Predictive Model for Progression of CKD.

    Science.gov (United States)

    Tangri, Navdeep; Inker, Lesley A; Hiebert, Brett; Wong, Jenna; Naimark, David; Kent, David; Levey, Andrew S

    2017-04-01

    Predicting the progression of chronic kidney disease (CKD) is vital for clinical decision making and patient-provider communication. We previously developed an accurate static prediction model that used single-timepoint measurements of demographic and laboratory variables. Development of a dynamic predictive model using demographic, clinical, and time-dependent laboratory data from a cohort of patients with CKD stages 3 to 5. We studied 3,004 patients seen April 1, 2001, to December 31, 2009, in the outpatient CKD clinic of Sunnybrook Hospital in Toronto, Canada. Age, sex, and urinary albumin-creatinine ratio at baseline. Estimated glomerular filtration rate (eGFR), serum albumin, phosphorus, calcium, and bicarbonate values as time-dependent predictors. Treated kidney failure, defined by initiation of dialysis therapy or kidney transplantation. We describe a dynamic (latest-available-measurement) prediction model using time-dependent laboratory values as predictors of outcome. Our static model included all 8 candidate predictors. The latest-available-measurement model includes age and the latter 5 variables as time-dependent predictors. We used Cox proportional hazards models for time to kidney failure and compared discrimination, calibration, model fit, and net reclassification for the models. We studied 3,004 patients, who had 344 kidney failure events over a median follow-up of 3 years and an average of 5 clinic visits. eGFR was more strongly associated with kidney failure in the latest-available-measurement model versus the baseline visit static model (HR, 0.44 vs 0.65). The association of calcium level was unchanged, but male sex and phosphorus, albumin, and bicarbonate levels were no longer significant. Discrimination and goodness of fit showed incremental improvement with inclusion of time-dependent covariates (integrated discrimination improvement, 0.73%; 95% CI, 0.56%-0.90%). Our data were derived from a nephrology clinic at a single center. We were
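
    A latest-available-measurement model of this kind can be fitted with time-dependent covariates in a counting-process (start/stop) data layout, in which each clinic visit opens a new interval carrying the most recent laboratory values. The sketch below is not the authors' code; it assumes the Python `lifelines` package, uses a tiny invented data set, and adds a ridge penalty only to keep the toy fit numerically stable.

```python
# Cox model with a time-dependent covariate in counting-process format.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is one interval between clinic visits; "egfr" holds the latest
# available measurement during that interval, and "event" flags kidney failure
# at the end of the interval. All values are invented for illustration.
long_format = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3, 3],
    "start": [0.0, 1.0, 2.0, 0.0, 1.5, 0.0, 1.0],
    "stop":  [1.0, 2.0, 3.2, 1.5, 4.0, 1.0, 2.5],
    "egfr":  [45.0, 38.0, 24.0, 55.0, 50.0, 30.0, 20.0],
    "event": [0, 0, 1, 0, 0, 0, 1],
})

# Penalized fit purely to keep this tiny toy example numerically stable
ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(long_format, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```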

  11. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    Science.gov (United States)

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and the epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  13. Flood hazard mapping of Palembang City by using 2D model

    Science.gov (United States)

    Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri

    2017-11-01

    Palembang, the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that is flooded almost every year. Flooding in the city is highly related to the Musi River Basin. Based on the Indonesian National Agency for Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause flooding in the city, such as high-intensity rainfall, inadequate drainage capacity, and backwater flow due to spring tides. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make flooding worse. The objective of this study is to develop a flood hazard map of Palembang City by using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding general guidelines for disaster risk assessment. The result for the 25-year return period flood shows that, with an inundation area of 112.47 km2, 14 sub-catchments are categorized as having a high hazard level. It is expected that the hazard map can be used for risk assessment.

  14. Predicting disability pension - depression as hazard: a 10 year population-based cohort study in Norway.

    Science.gov (United States)

    Lassemo, Eva; Sandanger, Inger; Nygård, Jan F; Sørgaard, Knut W

    2016-03-01

    Disability pension (DP) is an escalating challenge to individuals and the welfare state, with mental health problems as an imminent hazard. The objective of the present paper was to determine whether a diagnosis of depression increased the risk of subsequent DP, and whether the risk differed by gender. A population cohort of 1230 persons was diagnostically interviewed (Composite International Diagnostic Interview, CIDI) in a population study examining mental health, linked to the DP registry and followed for 10 years. The risk of DP following depression was estimated using Cox regression. Life-time depression, as well as current depression, increased the risk of subsequent DP for both genders. The fully adjusted [baseline health, health behavior and socio-economic status (SES)] hazard ratios (HRs) for life-time depressed men and women were 2.9 [95% confidence interval (CI) 1.5-5.8] and 1.6 (95% CI 1.0-2.5), respectively. Men were significantly older at the time of DP. There are reasons to believe that depression went under-recognized and under-treated. To augment knowledge in the field, without underestimating depression as a risk for DP, a deeper understanding of the nature and effects of other distress is needed. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Survival model construction guided by fit and predictive strength.

    Science.gov (United States)

    Chauvel, Cécile; O'Quigley, John

    2017-06-01

    Survival model construction can be guided by goodness-of-fit techniques as well as measures of predictive strength. Here, we aim to bring together these distinct techniques within the context of a single framework. The goal is how to best characterize and code the effects of the variables, in particular time dependencies, when taken either singly or in combination with other related covariates. Simple graphical techniques can provide an immediate visual indication as to the goodness-of-fit but, in cases of departure from model assumptions, will point in the direction of a more involved and richer alternative model. These techniques appear to be intuitive. This intuition is backed up by formal theorems that underlie the process of building richer models from simpler ones. Measures of predictive strength are used in conjunction with these goodness-of-fit techniques and, again, formal theorems show that these measures can be used to help identify models closest to the unknown non-proportional hazards mechanism that we can suppose generates the observations. Illustrations from studies in breast cancer show how these tools can be of help in guiding the practical problem of efficient model construction for survival data. © 2016, The International Biometric Society.

  16. Modelling clustering of natural hazard phenomena and the effect on re/insurance loss perspectives

    Science.gov (United States)

    Khare, S.; Bonazzi, A.; Mitas, C.; Jewson, S.

    2015-06-01

    In this paper, we present a conceptual framework for modelling clustered natural hazards that makes use of historical event data as a starting point. We review a methodology for modelling clustered natural hazard processes called Poisson mixtures. This methodology is suited to the application we have in mind as it naturally models processes that yield cross-event correlation (unlike homogeneous Poisson models), has a high degree of tunability to the problem at hand and is analytically tractable. Using European windstorm data as an example, we provide evidence that the historical data show strong evidence of clustering. We then develop Poisson and Clustered simulation models for the data, demonstrating clearly the superiority of the Clustered model which we have implemented using the Poisson mixture approach. We then discuss the implications of including clustering in models of prices of catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of unique insights into the impact clustering has on modelled catXL contract prices. The simple modelling example in this paper provides a clear and insightful starting point for practitioners tackling more complex natural hazard risk problems.
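
    The core idea of the Poisson mixture can be shown in a few lines: conditional on a latent annual rate the event count is Poisson, and mixing that rate over a gamma distribution (one member of the Poisson mixture family, the negative binomial) produces over-dispersed, clustered counts with the same mean. The parameters below are illustrative and are not fitted to European windstorm data.

```python
# Homogeneous Poisson vs gamma-mixed Poisson (clustered) annual event counts.
import numpy as np

rng = np.random.default_rng(11)
n_years, mean_rate = 100_000, 3.0

# Homogeneous Poisson: variance equals the mean
poisson_counts = rng.poisson(mean_rate, n_years)

# Gamma-mixed Poisson (negative binomial): same mean, larger variance
shape = 2.0                                   # smaller shape -> stronger clustering
rates = rng.gamma(shape, mean_rate / shape, n_years)
clustered_counts = rng.poisson(rates)

for name, x in [("Poisson", poisson_counts), ("Poisson mixture", clustered_counts)]:
    print(f"{name:16s} mean={x.mean():.2f}  variance={x.var():.2f}")
```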

  17. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    Science.gov (United States)

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application to the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strengths and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is well suited to handling geospatial data related to mining-induced hazards, the application and feasibility of GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  18. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    ... System (ANFIS) was employed to select the two most influential of the five input measurements. This search was separately conducted for each of the output measurements. Regression models were developed from the collected anthropometric data. Also, the predictive performance of these models was examined using ...

  19. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of the potential for bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a model of bankruptcy prediction. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to this comparative study of the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  20. The Non-Parametric Identification of the Mixed Proportional Hazards Competing Risks Model

    NARCIS (Netherlands)

    Abbring, J.H.; Berg, van den G.J.

    2000-01-01

    We prove identification of dependent competing risks models in which each risk has a mixed proportional hazard specification with regressors, and the risks are dependent by way of the unobserved heterogeneity, or frailty, components. We show that the conditions for non-parametric identification given

  1. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...

  2. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...

  3. An estimating equation for parametric shared frailty models with marginal additive hazards

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Martinussen, Torben

    2004-01-01

    frailty models with marginal additive hazards by using the Lin and Ying estimators. We give the large sample properties of the estimators arising from these estimating equations and investigate their small sample properties by Monte Carlo simulation. A real example is provided for illustration...

  4. Combining computational models for landslide hazard assessment of Guantánamo province, Cuba

    NARCIS (Netherlands)

    Castellanos Abella, E.A.

    2008-01-01

    As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial

  5. Using H/V spectral ratios to constrain 1-D subsurface models for seismic hazard assessment

    Science.gov (United States)

    Shapira, A.; Zaslavsky, Y.

    2003-04-01

    In recent years, considerable research has focused on establishing reliable methods to predict earthquake ground motions for seismic hazard assessment. The seismic motions are significantly affected by the soil layers at the site and by the impedance ratio between surficial and underlying deposits. These yield frequency-selective amplification effects that are important parameters in the earthquake-resistant design of buildings and in the preparation of earthquake damage scenarios. Numerical methods for estimating site effects require modeling of the subsurface, primarily the shear-wave velocities of the sedimentary layers and underlying rock and the thickness of each layer. In many cases, it is difficult to construct such models using only conventional geophysical methods and borehole information, especially with regard to the deeper sediments. We have encountered such difficulties at several bridge design sites in Israel, located along or near the seismically active Dead Sea transform. There, and in many other places, we found it very useful to constrain the subsurface models by considering site response functions evaluated using H/V spectral ratio techniques. A number of bridge construction sites were instrumented with three-component seismometers. We evaluated the empirical site response function from H/V spectral ratios of weak motions from local and regional earthquakes and from measurements of ambient noise. The average spectral ratio estimated for soil sites showed amplification factors of up to 5 in the frequency range of 0.4 to 0.8 Hz. Regional geology data, S-wave refraction surveys in different areas for similar geological units, and borehole information were used to construct a 1D subsurface model for each site, from which an analytical site response function was calculated. The uncertainty associated with the proposed subsurface models yields too high a variability between the analytical site response functions. Hence, we found it
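
    A minimal sketch of the H/V spectral ratio computation referred to above is given below: it forms the ratio of the geometric-mean horizontal to vertical amplitude spectra of a three-component record. The synthetic noise record, the Hann taper, and the simple box-car smoothing are illustrative assumptions rather than the processing actually used in the study.

    ```python
    import numpy as np

    def smooth(x, w=51):
        """Simple box-car smoothing of an amplitude spectrum."""
        return np.convolve(x, np.ones(w) / w, mode="same")

    def hv_spectral_ratio(ns, ew, z, fs):
        """H/V ratio: geometric mean of horizontal amplitude spectra over the vertical one."""
        n = len(z)
        taper = np.hanning(n)                         # reduce spectral leakage
        amp = lambda x: np.abs(np.fft.rfft((x - x.mean()) * taper))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        h = smooth(np.sqrt(amp(ns) * amp(ew)))        # combined horizontal spectrum
        v = smooth(amp(z))
        return freqs[1:], h[1:] / v[1:]               # drop the zero-frequency bin

    # Synthetic three-component ambient-noise record (purely illustrative).
    fs = 100.0
    t = np.arange(0.0, 600.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    site = 3.0 * np.sin(2 * np.pi * 0.6 * t)          # fake horizontal site resonance at 0.6 Hz
    z = rng.normal(size=t.size)
    ns = rng.normal(size=t.size) + site
    ew = rng.normal(size=t.size) + site
    freqs, hv = hv_spectral_ratio(ns, ew, z, fs)
    print(f"H/V peak near {freqs[np.argmax(hv)]:.2f} Hz")
    ```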

  6. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only at fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
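
    The following sketch illustrates the kind of simple averaging baseline discussed above: each 15-minute slot of the next day is predicted as the mean of the same slot over the last few days. The synthetic consumption series and the three-day window are illustrative assumptions, not the study's dataset or configuration.

    ```python
    import numpy as np

    SLOTS_PER_DAY = 96  # 15-minute intervals per day

    def averaging_forecast(history, n_days=3):
        """Predict each 15-min slot of the next day as the mean of the same slot
        over the last n_days days (a simple averaging baseline)."""
        days = history.reshape(-1, SLOTS_PER_DAY)      # shape: (n_history_days, 96)
        return days[-n_days:].mean(axis=0)

    # Synthetic consumption history: 10 days with a daily cycle plus noise (illustrative).
    rng = np.random.default_rng(0)
    t = np.arange(10 * SLOTS_PER_DAY)
    history = 50 + 20 * np.sin(2 * np.pi * t / SLOTS_PER_DAY) + rng.normal(0, 2, t.size)

    forecast = averaging_forecast(history, n_days=3)
    actual_next = 50 + 20 * np.sin(2 * np.pi * np.arange(SLOTS_PER_DAY) / SLOTS_PER_DAY)
    mape = np.mean(np.abs(forecast - actual_next) / actual_next) * 100
    print(f"MAPE of averaging baseline: {mape:.1f}%")
    ```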

  7. Predicting the impact from significant storm events on a hazardous waste site

    Energy Technology Data Exchange (ETDEWEB)

    Singh, U.P. [CH2M Hill, Oakland, CA (United States); Dixon, N.P. [CH2M Hill, Redding, CA (United States); Mitchell, J.S. [CH2M Hill, Helena, MT (United States)

    1994-12-31

    The Stringfellow Hazardous Waste Site is a former Class 1 industrial waste disposal facility located near the community of Glen Avon in southern California. In response to community concerns regarding flooding and possible exposure to contaminants via the surface water pathway, a study was performed to evaluate the potential effect significant/episodic storm events may have on the site and its engineered structures as they exist during present day conditions. Specific storm events such as significant recorded historic storms as well as synthetic design storms were considered and the impact on the onsite area and surface channels in Pyrite Canyon downstream of the site was evaluated. Conclusions were reached, and recommendations were made to minimize the potential flood impacts and exposure to contaminants via the surface water pathway in the areas downstream of the site.

  8. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Science.gov (United States)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent globally scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  9. Turning the rumor of the May 11, 2011, earthquake prediction in Rome, Italy, into an information day on earthquake hazard

    Directory of Open Access Journals (Sweden)

    Concetta Nostro

    2012-07-01

    Full Text Available A devastating earthquake was predicted to hit Rome on May 11, 2011. This prediction was never officially released, but it grew on the internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this fed the credibility of the earthquake prediction. During the months preceding May 2011, the Istituto Nazionale di Geofisica e Vulcanologia (INGV) was overwhelmed with requests for information about this prediction by the inhabitants of Rome and by tourists. Given the echo of this earthquake prediction, on May 11, 2011, the INGV decided to organize an Open Day at its headquarters in Rome, to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, to talk with journalists about this prediction, and to present the Open Day. During this 'Day', 13 new videos were also posted on our YouTube/INGVterremoti channel to explain earthquake processes and hazards, and to provide periodic updates on seismicity in Italy from the seismicity monitoring room. On May 11, 2011, the INGV headquarters was peacefully invaded by over 3,000 visitors, from 10:00 am to 9:00 pm: families, students with and without teachers, civil protection groups, and many journalists. This initiative, which was put together in only a few weeks, received very large feedback and was a great opportunity to talk with journalists and the public about earthquake prediction and, more generally, about seismic risk in Italy.

  10. Flood Hazard Mapping Combining Hydrodynamic Modeling and Multi Annual Remote Sensing data

    Directory of Open Access Journals (Sweden)

    Laura Giustarini

    2015-10-01

    Full Text Available This paper explores a method to combine the time and space continuity of a large-scale inundation model with discontinuous satellite microwave observations, for high-resolution flood hazard mapping. The assumption behind this approach is that hydraulic variables computed from continuous spatially-distributed hydrodynamic modeling and observed as discrete satellite-derived flood extents are correlated in time, so that probabilities can be transferred from the model series to the observations. A prerequisite is, therefore, the existence of a significant correlation between a modeled variable (i.e., flood extent or volume) and the synchronously observed flood extent. If this is the case, the availability of model simulations over a long time period allows for a robust estimate of non-exceedance probabilities that can be attributed to corresponding synchronously-available satellite observations. The generated flood hazard map has a spatial resolution equal to that of the satellite images, which is higher than that of currently available large-scale inundation models. The method was applied to the Severn River (UK), using the outputs of a global inundation model provided by the European Centre for Medium-range Weather Forecasts and a large collection of ENVISAT ASAR imagery. A comparison between the hazard map obtained with the proposed method and with a more traditional numerical modeling approach supports the hypothesis that combining model results and satellite observations could provide advantages for high-resolution flood hazard mapping, provided that a sufficient number of remote sensing images is available and that a time correlation is present between variables derived from a global model and obtained from satellite observations.
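
    A minimal sketch of the probability-transfer idea follows: empirical non-exceedance probabilities are computed from a long modeled flood-extent series and attached to the extents simulated at the satellite acquisition times. The gamma-distributed synthetic series and the overpass values are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Long daily series of modeled flood extent (km^2) from a large-scale model (synthetic).
    modeled_extent = rng.gamma(shape=2.0, scale=50.0, size=30 * 365)
    sorted_extent = np.sort(modeled_extent)

    # Modeled extents at the times of the satellite acquisitions (synthetic values).
    extent_at_overpass = np.array([60.0, 150.0, 320.0])

    # Empirical non-exceedance probability of each overpass extent within the model series;
    # this probability is then attached to the corresponding satellite-derived flood map.
    non_exceedance = np.searchsorted(sorted_extent, extent_at_overpass) / modeled_extent.size

    for extent, p in zip(extent_at_overpass, non_exceedance):
        print(f"modeled extent {extent:6.1f} km2 -> non-exceedance probability {p:.3f}")
    ```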

  11. Predictive modeling of a radiative shock system

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, James Paul, E-mail: hagar@umich.edu [Department of Nuclear Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Bingham, Derek [Statistics and Actuarial Science, Simon Fraser University, Burnaby, BC, V5A 1S6 (Canada); Chou, Chuan-Chih; Doss, Forrest; Paul Drake, R.; Fryxell, Bruce; Grosskopf, Michael; Holst, Bart van der [Atmospheric Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Mallick, Bani K. [Department of Statistics, Texas A and M University, College Station, TX 77843-3143 (United States); McClarren, Ryan [Institute for Applied Mathematics and Computational Science, Texas A and M University, College Station, TX 77843-3133 (United States); Mukherjee, Ashin; Nair, Vijay [Department of Statistics, University of Michigan, Ann Arbor, MI 48109 (United States); Powell, Kenneth G. [Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Ryu, D. [Department of Statistics, Texas A and M University, College Station, TX 77843-3143 (United States); Sokolov, Igor; Toth, Gabor [Atmospheric Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Zhang Zhanyang [Department of Statistics, University of Michigan, Ann Arbor, MI 48109 (United States)

    2011-09-15

    A predictive model is constructed for a radiative shock experiment, using a combination of a physics code and experimental measurements. The CRASH code can model the radiation hydrodynamics of the radiative shock launched by the ablation of a Be drive disk and driven down a tube filled with Xe. The code is initialized by a preprocessor that uses data from the Hyades code to model the initial 1.3 ns of the system evolution, with this data fit over seven input parameters by a Gaussian process model. The CRASH code output for shock location from 320 simulations is modeled by another Gaussian process model that combines the simulation data with eight field measurements of a CRASH experiment, and uses this joint model to construct a posterior distribution for the physical parameters of the simulation (model calibration). This model can then be used to explore sensitivity of the system to the input parameters. Comparison of the predicted shock locations in a set of leave-one-out exercises shows that the calibrated model can predict the shock location within experimental uncertainty.
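
    The sketch below illustrates the Gaussian process emulation step in a generic way, using scikit-learn to fit an emulator of a simulator output over several input parameters and return a predictive mean and standard deviation. The synthetic "shock location" response, the kernel, and the sample sizes are illustrative assumptions, not the CRASH/Hyades setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic "simulation campaign": 320 runs over 7 scaled input parameters.
    X = rng.uniform(0.0, 1.0, size=(320, 7))
    # Hypothetical shock location (microns) as a smooth function of the inputs plus noise.
    y = 2000 + 300 * X[:, 0] - 150 * X[:, 1] ** 2 + 50 * np.sin(4 * X[:, 2]) + rng.normal(0, 5, 320)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(7)) + WhiteKernel(1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Emulate the simulator at a new input point, with predictive uncertainty.
    x_new = rng.uniform(0.0, 1.0, size=(1, 7))
    mean, std = gp.predict(x_new, return_std=True)
    print(f"predicted shock location: {mean[0]:.1f} +/- {std[0]:.1f}")
    ```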

  12. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  13. Beam propagation model for the laser hazard evaluations including optical instruments

    Energy Technology Data Exchange (ETDEWEB)

    Schulmeister, K.; Althaus, S.; Grabner, U.; Vees, G. [ARC Seibersdorf Research GmbH (Austria)

    2004-07-01

    A beam propagation model was developed to calculate the most hazardous position, the angular subtense of the apparent source, and the power that passes through a 7 mm aperture representing the pupil of the eye, for a Gaussian laser beam and including the use of magnifiers and telescopes. The results for the thermal retinal hazard are discussed, and it is shown that for the telescope, in most cases, the beam waist at the most hazardous position can be treated as the apparent source, where the dependencies and regions are equivalent to those of the naked eye discussed elsewhere. For magnifying glasses, the beam is transformed in terms of beam waist diameter and divergence, and consequently the dependencies and regions are also flipped. (orig.)
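
    The sketch below shows the textbook Gaussian-beam relations that such a hazard evaluation builds on: the beam radius as a function of distance from the waist and the fraction of beam power passing a 7 mm pupil. The wavelength, waist size and evaluation distances are illustrative values, not those of the paper.

    ```python
    import numpy as np

    def beam_radius(z, w0, wavelength):
        """1/e^2 beam radius at distance z from a Gaussian beam waist w0."""
        z_r = np.pi * w0 ** 2 / wavelength           # Rayleigh range
        return w0 * np.sqrt(1.0 + (z / z_r) ** 2)

    def power_through_aperture(w, a):
        """Fraction of a centred Gaussian beam's power passing a circular aperture of radius a."""
        return 1.0 - np.exp(-2.0 * a ** 2 / w ** 2)

    wavelength = 532e-9        # m (illustrative visible laser)
    w0 = 0.1e-3                # m, beam waist radius (illustrative)
    pupil_radius = 3.5e-3      # m, i.e. the 7 mm pupil diameter used for the eye

    for z in (0.1, 2.0, 5.0):  # evaluation distances in metres
        w = beam_radius(z, w0, wavelength)
        frac = power_through_aperture(w, pupil_radius)
        print(f"z={z:4.1f} m  beam radius={w*1e3:5.2f} mm  fraction through 7 mm pupil={frac:.3f}")
    ```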

  14. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    OpenAIRE

    Silva, R.G.; W. H. KWONG

    1999-01-01

    A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in...

  15. Using the RBFN model and GIS technique to assess wind erosion hazards of Inner Mongolia, China

    Science.gov (United States)

    Shi, Huading; Liu, Jiyuan; Zhuang, Dafang; Hu, Yunfeng

    2006-08-01

    Soil wind erosion is the primary process and the main driving force for land desertification and sand-dust storms in arid and semi-arid areas of Northern China, and it has received considerable research attention. This paper selects the Inner Mongolia autonomous region as the research area, quantifies the various indicators affecting soil wind erosion, uses GIS technology to extract the spatial data, and constructs an RBFN (Radial Basis Function Network) model for the assessment of wind erosion hazard. After training on sample data for the different levels of wind erosion hazard, we obtain the parameters of the model and then assess the wind erosion hazard. The results show that wind erosion hazard is very severe in the southern parts of Inner Mongolia, varies from moderate to severe in the middle regions, and is slight in the east. Comparison of the result with other research shows that it is consistent with actual conditions, demonstrating the reasonability and applicability of the RBFN model.
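
    As a generic illustration of the RBFN approach, the sketch below builds Gaussian radial basis functions on a set of centres and fits a linear readout by least squares. The three toy "wind erosion indicators" and the hazard scores are hypothetical and are not the Inner Mongolia data.

    ```python
    import numpy as np

    def rbf_design(X, centres, sigma):
        """Gaussian RBF activations for every sample/centre pair."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    rng = np.random.default_rng(0)
    # Hypothetical wind-erosion indicators per cell (e.g. aridity, wind speed, cover), scaled to [0, 1].
    X = rng.uniform(size=(200, 3))
    # Hypothetical hazard score used for training (here a smooth function of the indicators).
    y = 2 * X[:, 0] + X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.05, 200)

    centres = X[rng.choice(len(X), size=20, replace=False)]    # 20 training points as centres
    Phi = rbf_design(X, centres, sigma=0.3)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear readout by least squares

    y_hat = rbf_design(X, centres, sigma=0.3) @ weights
    print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
    ```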

  16. An interoceptive predictive coding model of conscious presence

    National Research Council Canada - National Science Library

    Seth, Anil K; Suzuki, Keisuke; Critchley, Hugo D

    2011-01-01

    .... The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role...

  17. An Interoceptive Predictive Coding Model of Conscious Presence

    National Research Council Canada - National Science Library

    Seth, Anil K; Suzuki, Keisuke; Critchley, Hugo D

    2012-01-01

    .... The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signalling in cortex, the role...

  18. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  19. Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards

    Energy Technology Data Exchange (ETDEWEB)

    Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J

    2007-11-26

    This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end user's computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.

  20. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have a great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
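
    For readers unfamiliar with the model family, the sketch below simulates a GARCH(1,1) return series and produces a one-step-ahead conditional volatility forecast. The parameter values are illustrative and were not estimated from the Bovespa or Dow Jones series.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative GARCH(1,1) parameters (not estimated from the indices in the study).
    omega, alpha, beta = 1e-6, 0.08, 0.90
    n = 2000

    eps = np.empty(n)             # log-returns (zero mean)
    sigma2 = np.empty(n)          # conditional variances
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance as a starting value
    eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    # One-step-ahead volatility forecast conditional on the simulated history.
    sigma2_next = omega + alpha * eps[-1] ** 2 + beta * sigma2[-1]
    print(f"one-step-ahead volatility forecast: {np.sqrt(sigma2_next):.4%}")
    ```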

  1. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.

    Science.gov (United States)

    Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C

    2015-09-08

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. Copyright © 2015 Ihekwaba et al.

  2. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    Science.gov (United States)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and runout paths of debris flows depend on the volume, composition and initiation zone of the released material, and this information is required to make accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve predictions of the runout and impact of such abruptly released masses.
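
    A minimal sketch of the kind of simple runout estimate that can be coupled to a triggering model is given below, using an empirical angle-of-reach (Fahrböschung) relation in which the travel angle decreases with landslide volume. The coefficients and the example numbers are placeholders, not the values calibrated in the study.

    ```python
    def runout_length(drop_height_m, volume_m3, k=1.03, a=-0.105):
        """Empirical angle-of-reach relation: tan(alpha) = k * V**a (coefficients illustrative).
        Horizontal travel distance L = H / tan(alpha)."""
        tan_alpha = k * volume_m3 ** a
        return drop_height_m / tan_alpha

    # Example: a 500 m^3 shallow landslide released 120 m above the valley floor.
    L = runout_length(drop_height_m=120.0, volume_m3=500.0)
    print(f"estimated runout distance: {L:.0f} m")
    ```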

  3. Multiscale Modeling of Angiogenesis and Predictive Capacity

    Science.gov (United States)

    Pillay, Samara; Byrne, Helen; Maini, Philip

    Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models incorporating single, multi-species and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.

  4. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Omid Boyer

    2013-01-01

    Full Text Available Technological progress has led to an increase in industrial hazardous waste worldwide. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can result from the location of undesirable facilities and also from the routing of hazardous waste. In this paper, a bi-objective mixed integer programming model for the location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk; the risk of population exposure within a bandwidth along the route is used to measure transportation risk. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem reveal a conflict between the two objectives: it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The problem is applied in Markazi province in Iran.
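
    The sketch below illustrates the weighted-sum scalarisation mentioned above on a handful of made-up candidate plans: the two normalised objectives (cost and transportation risk) are combined with a weight that is varied to trace the trade-off. It is not the GAMS/CPLEX formulation of the paper.

    ```python
    import numpy as np

    # Hypothetical candidate plans: (total cost in million $, population-exposure risk score).
    candidates = np.array([
        [12.0, 9.5],
        [14.5, 6.0],
        [18.0, 4.2],
        [25.0, 3.9],
    ])

    cost = candidates[:, 0] / candidates[:, 0].max()   # normalise objectives to [0, 1]
    risk = candidates[:, 1] / candidates[:, 1].max()

    for w in (0.2, 0.5, 0.8):                          # weight on cost vs. (1 - w) on risk
        score = w * cost + (1.0 - w) * risk
        best = int(np.argmin(score))
        print(f"w={w:.1f}  best plan -> cost={candidates[best, 0]:.1f} M$, risk={candidates[best, 1]:.1f}")
    ```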

  5. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of Goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness of fits and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.

  6. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment.Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan.Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities.Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems.Main findings: A test of Goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk.Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product.Contribution/value-add: The study utilises different goodness of fits and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.

  7. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including impacts on water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge on the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers

  8. Climate Models have Accurately Predicted Global Warming

    Science.gov (United States)

    Nuccitelli, D. A.

    2016-12-01

    Climate model projections of global temperature changes over the past five decades have proven remarkably accurate, and yet the myth that climate models are inaccurate or unreliable has formed the basis of many arguments denying anthropogenic global warming and the risks it poses to the climate system. Here we compare average global temperature predictions made by both mainstream climate scientists using climate models, and by contrarians using less physically-based methods. We also explore the basis of the myth by examining specific arguments against climate model accuracy and their common characteristics of science denial.

  9. An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard

    Directory of Open Access Journals (Sweden)

    Xiulan Wang

    2014-01-01

    it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. And the model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.

  10. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control....

  11. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    This paper investigates the prediction of residual stresses developed in shielded manual metal arc welding of mild steel plates through Finite Element Model simulation and experiments. The existence of residual stresses that cause fatigue and distortion in welded structures has been responsible for failure of machine parts ...

  12. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. ... In this paper, we will present an introduction to the theory and application of MPC with Matlab codes written to simulate an example of a randomly generated system.

  13. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  14. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high-performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enable the effective exploitation of the clean renewable energy sources...

  15. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  16. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  17. DEVELOPING PREDICTIVE MODELS OF INTERNET SERVICE STRATEGIES

    Directory of Open Access Journals (Sweden)

    Maxim Yu. Khramov

    2015-01-01

    Full Text Available Issues related to strategic management of interactive online services as well as difficulties in predicting results of strategies’ implementation were studied in the article. As a result methodological and technological solutions were worked out; the solutions are based on usage of simulation and particularly based on the combination of such methods as system dynamics and agent-based modeling.

  18. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    In this article, the summaries of the presentations given during the 30th meeting of the Fusarium Working Group are presented. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  19. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  20. Review of Sand Production Prediction Models

    Directory of Open Access Journals (Sweden)

    Hossein Rahmati

    2013-01-01

    Full Text Available Sand production in oil and gas wells can occur if fluid flow exceeds a certain threshold governed by factors such as consistency of the reservoir rock, stress state and the type of completion used around the well. The amount of solids can be less than a few grams per cubic meter of reservoir fluid, posing only minor problems, or a substantial amount over a short period of time, resulting in erosion and in some cases filling and blocking of the wellbore. This paper provides a review of selected approaches and models that have been developed for sanding prediction. Most of these models are based on the continuum assumption, while a few have recently been developed based on discrete element model. Some models are only capable of assessing the conditions that lead to the onset of sanding, while others are capable of making volumetric predictions. Some models use analytical formulae, particularly those for estimating the onset of sanding while others use numerical models, particularly in calculating sanding rate. Although major improvements have been achieved in the past decade, sanding tools are still unable to predict the sand mass and the rate of sanding for all field problems in a reliable form.

  1. The Pedestrian Evacuation Analyst: geographic information systems software for modeling hazard evacuation potential

    Science.gov (United States)

    Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.

    2014-01-01

    Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a

  2. Use of computational models to reconstruct and predict trichloroethylene exposure.

    Science.gov (United States)

    Maslia, M L; Aral, M M; Williams, R C; Williams-Fleetwood, S; Hayes, L C; Wilder, L C

    1996-01-01

    This study addresses a situation of a type frequently encountered by ATSDR: groundwater and surface-water contamination occurred near the Gratuity Road site in the town of Groton, Massachusetts. A petitioned public health assessment for the Gratuity Road site identified the primary contaminants as trichloroethylene (TCE), 1,1,1-trichloroethane (TCA), hexavalent chromium (Cr+6), chromium (Cr), and lead (Pb) (ATSDR 1992). The health assessment also indicated that off-site residential groundwater wells had been contaminated with TCE and TCA. Because direct measures of historical exposure to TCE are unavailable for the Gratuity Road site, computational models were used to reconstruct and predict exposure to TCE. These computational models included environmental transport and exposure models. For the environmental transport models, numerical methods were used to approximate the equations of groundwater flow and contaminant transport. The results of the environmental transport models provided the spatial and temporal database necessary to conduct an exposure analysis. This database indicated that groundwater concentrations of TCE typically exceeded EPA's MCL of 5 ppb for TCE. The study demonstrated that although a hazardous waste site can be remediated, nearby populations may experience significant exposure because of historical contamination, which will not be captured by remediation activities. The exposure analysis used simulated concentrations of TCE predicted by the environmental transport models. These concentrations were used to compare exposure to TCE from inhalation in a one-compartment shower model with exposure from ingestion of domestic water contaminated by TCE. The exposure model indicated that exposure to TCE by the inhalation route during showering is nearly identical to exposure by ingestion of domestic water supplies contaminated with TCE. As a result, entry by the inhalation route is as important as entry by the ingestion route when conducting exposure analyses of
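
    The sketch below is a generic one-compartment shower mass balance of the type referred to above, integrated with a simple Euler scheme and compared against a drinking-water ingestion dose. All parameter values (water concentration, transfer efficiency, flow rates, breathing rate) are illustrative assumptions, not those calibrated for the Gratuity Road site.

    ```python
    import numpy as np

    # Illustrative parameters (not the values used in the ATSDR analysis).
    C_w = 50.0         # TCE in tap water, micrograms per litre
    Q_w = 8.0          # shower water flow, litres per minute
    TE = 0.6           # fraction of TCE volatilised from water to air
    V = 2.0 * 1000.0   # shower stall air volume, litres (2 m^3)
    Q_a = 50.0         # air exchange, litres per minute
    BR = 14.0          # breathing rate, litres of air per minute
    t_shower = 10.0    # shower duration, minutes

    # Euler integration of dC_a/dt = (Q_w*C_w*TE - Q_a*C_a) / V  (well-mixed air).
    dt, C_a, inhaled = 0.01, 0.0, 0.0
    for _ in np.arange(0.0, t_shower, dt):
        inhaled += BR * C_a * dt                       # micrograms inhaled in this step
        C_a += dt * (Q_w * C_w * TE - Q_a * C_a) / V

    ingested = 2.0 * C_w                               # 2 litres of tap water per day
    print(f"inhalation dose during shower: {inhaled:.0f} ug, ingestion dose: {ingested:.0f} ug")
    ```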

  3. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed the predictive capacity of the different multifactor models to be examined. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  4. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  5. Disease Prediction Models and Operational Readiness

    Science.gov (United States)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness

  6. Extension of Cox Proportional Hazard Model for Estimation of Interrelated Age-Period-Cohort Effects on Cancer Survival

    OpenAIRE

    Mdzinarishvili, Tengiz; Gleason, Michael X.; Kinarsky, Leo; Sherman, Simon

    2011-01-01

    In the frame of the Cox proportional hazard (PH) model, a novel two-step procedure for estimating age-period-cohort (APC) effects on the hazard function of death from cancer was developed. In the first step, the procedure estimates the influence of joint APC effects on the hazard function, using Cox PH regression procedures from a standard software package. In the second step, the coefficients for age at diagnosis, time period and birth cohort effects are estimated. To solve the identifiabili...
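
    A minimal sketch of the first step described above, fitting a Cox proportional hazards regression with age-at-diagnosis and period covariates (the birth cohort being their difference), is shown below. It assumes the lifelines Python package and uses synthetic survival data rather than cancer registry records.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 2000

    # Synthetic survival data with age-at-diagnosis and period covariates (cohort = period - age,
    # so only two of the three APC effects can enter the regression directly).
    age = rng.uniform(40, 80, n)
    period = rng.uniform(1990, 2010, n)
    hazard_scale = np.exp(0.03 * (age - 60) - 0.01 * (period - 2000))
    time = rng.exponential(5.0 / hazard_scale)
    event = (time < 10).astype(int)                  # administrative censoring at 10 years
    time = np.minimum(time, 10.0)

    df = pd.DataFrame({"time": time, "event": event, "age": age, "period": period})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "exp(coef)", "p"]])
    ```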

  7. Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models

    Science.gov (United States)

    Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges

    2016-04-01

    The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an ongoing program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (as distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model

  8. Developing predictive models of health literacy.

    Science.gov (United States)

    Martin, Laurie T; Ruder, Teague; Escarce, José J; Ghosh-Dastidar, Bonnie; Sherman, Daniel; Elliott, Marc; Bird, Chloe E; Fremont, Allen; Gasper, Charles; Culbert, Arthur; Lurie, Nicole

    2009-11-01

    Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having 'above basic' proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.
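
    As a rough illustration of the two model forms described (not the authors' exact specification), a linear model for mean scores and a probit model for 'above basic' proficiency could be fitted with statsmodels; the file and variable names below are hypothetical:

    import pandas as pd
    import statsmodels.formula.api as smf

    resp = pd.read_csv("naal_respondents.csv")   # hypothetical respondent-level file

    rhs = ("gender + age + C(race_ethnicity) + C(education) + poverty"
           " + C(marital_status) + home_language + C(msa) + years_in_us")
    linear = smf.ols("health_literacy_score ~ " + rhs, data=resp).fit()
    probit = smf.probit("above_basic ~ " + rhs, data=resp).fit()

    print(linear.rsquared, probit.prsquared)   # abstract reports roughly 0.30 and 0.21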

  9. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing, and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open up the prospect of predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  10. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  11. System Dynamics Model to develop resilience management strategies for lifelines exposed to natural hazards

    Science.gov (United States)

    Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele

    2016-04-01

    Moving in this direction, System Dynamics Modeling (SDM) is a suitable operative approach. SDM allows all resilience dimensions to be taken into account in an integrated and dynamic way. Furthermore, it allows predictive and learning functionality to be combined through feedback mechanisms, and fosters active involvement of stakeholders in the modelling process. The present paper shows some results of ongoing research activities. The main aim of the work is to describe, using SDM, the relationships and interdependencies between drinking water supply infrastructures and societies in building the resilience of urban communities in the case of natural disasters. Reflections are carried out on the comparison between two major earthquakes in Italy: L'Aquila in 2009 and Emilia Romagna in 2012. The model aims at defining a quantitative tool to assess the evolution of resilience of the drinking water supply system. Specifically, it has been used to evaluate the impact of actions and strategies for resilience improvement on the dynamic evolution of the system, thus suggesting the most suitable ones.

  12. Trends Prediction Using Social Diffusion Models

    OpenAIRE

    Altshuler, Yaniv; Pan, Wei; Pentland, Alex

    2011-01-01

    The importance of the ability to predict trends in social media has been growing rapidly in the past few years with the growing dominance of social media in our everyday life. Whereas many works focus on the detection of anomalies in networks, there exists little theoretical work on predicting the likelihood that an anomalous network pattern will spread globally and become a “trend”. In this work we present an analytic model for the social diffusion dynamics of spreading network patterns. Our...

  13. Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL

    Science.gov (United States)

    Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María

    2016-04-01

    The objective of this work is to determine whether soil erosion increases after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha was studied because of its high frequency of fires. This area is located in the northwest of the Iberian Peninsula. The erosion hazard was assessed at several times using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had been burnt in the past 15 years. Because a detailed study of such a large area is complex and information is not available annually, it was necessary to select the most relevant moments. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation) was used to calculate erosion loss maps. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS "gvSIG" (http://www.gvsig.com/es), with metadata taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics, some of whom suggest that it only serves to carry out comparisons between areas and not to calculate absolute soil loss values. These authors argue that in field measurements the actual recovered eroded soil can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
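
    For orientation, the (R)USLE family of models shares the multiplicative structure A = R · K · LS · C · P, applied cell by cell to raster layers; a minimal sketch follows (the factor values are purely illustrative, and RUSLE3D chiefly changes how the LS factor is derived from the DEM):

    import numpy as np

    def rusle_soil_loss(R, K, LS, C, P):
        """Mean annual soil loss per cell (e.g. t/ha/yr) from factor rasters."""
        return R * K * LS * C * P

    # Tiny illustrative 2 x 2 rasters
    A = rusle_soil_loss(R=np.full((2, 2), 900.0),               # rainfall erosivity
                        K=np.full((2, 2), 0.03),                # soil erodibility
                        LS=np.array([[1.2, 2.5], [0.8, 3.1]]),  # topographic factor
                        C=np.full((2, 2), 0.15),                # cover-management
                        P=np.ones((2, 2)))                      # support practices
    print(A)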

  14. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    SILVA R. G.

    1999-01-01

    A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
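
    A hedged sketch of the simultaneous formulation referred to above, in which states and inputs are decision variables and the discretized dynamics enter the NLP as equality constraints; a simple implicit-Euler grid on a scalar linear model stands in here for the paper's equidistant collocation scheme:

    import numpy as np
    from scipy.optimize import minimize

    N, dt, x0 = 20, 0.1, 1.0                  # horizon steps, step size, initial state

    def unpack(z):
        return z[:N], z[N:]                    # states x_1..x_N, inputs u_0..u_{N-1}

    def cost(z):
        x, u = unpack(z)
        return dt * np.sum(x**2 + 0.1 * u**2)

    def dynamics(z):                           # residuals of x_{k+1} = x_k + dt*(-x_{k+1} + u_k)
        x, u = unpack(z)
        x_prev = np.concatenate(([x0], x[:-1]))
        return x - x_prev - dt * (-x + u)

    res = minimize(cost, np.zeros(2 * N), method="SLSQP",
                   constraints=[{"type": "eq", "fun": dynamics}],
                   bounds=[(None, None)] * N + [(-1.0, 1.0)] * N)
    x_opt, u_opt = unpack(res.x)
    print("first control move:", u_opt[0])     # applied in receding-horizon fashion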

  15. Modeling Benthic Sediment Processes to Predict Water ...

    Science.gov (United States)

    The benthic sediment acts as a huge reservoir of particulate and dissolved material (within interstitial water) which can contribute to loading of contaminants and nutrients to the water column. A benthic sediment model is presented in this report to predict spatial and temporal benthic fluxes of nutrients and chemicals in Narragansett Bay. A benthic sediment model is presented in this report to identify benthic flux into the water column in Narragansett Bay. Benthic flux is essential to properly model water quality and ecology in estuarine and coastal systems.

  16. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
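
    A minimal sketch of the AR pole-magnitude feature described (windowing, filtering and electrode handling omitted; assumes the statsmodels package and a placeholder signal in place of real SEMG data):

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def mean_ar_pole_magnitude(semg_window, order=5):
        """Fit an AR(order) model and return the mean magnitude of its poles."""
        fit = AutoReg(semg_window, lags=order, trend="c").fit()
        a = np.asarray(fit.params)[1:]                       # lag coefficients a_1..a_p
        poles = np.roots(np.concatenate(([1.0], -a)))        # roots of z^p - a_1 z^(p-1) - ... - a_p
        return float(np.mean(np.abs(poles)))

    window = np.random.default_rng(0).standard_normal(2000)  # stand-in for one repetition's SEMG
    print(mean_ar_pole_magnitude(window))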

  17. Flood Hazard Mapping by Using Geographic Information System and Hydraulic Model: Mert River, Samsun, Turkey

    OpenAIRE

    Vahdettin Demir; Ozgur Kisi

    2016-01-01

    In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Centers River Analysis System (HEC-RAS). In this river basin, human life losses and a significant amount of property damage were experienced in the 2012 flood. The preparation of flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows...

  18. A new approach for deriving Flood hazard maps from SAR data and global hydrodynamic models

    Science.gov (United States)

    Matgen, P.; Hostache, R.; Chini, M.; Giustarini, L.; Pappenberger, F.; Bally, P.

    2014-12-01

    With the flood consequences likely to amplify because of the growing population and ongoing accumulation of assets in flood-prone areas, global flood hazard and risk maps are needed for improving flood preparedness at large scale. At the same time, with the rapidly growing archives of SAR images of floods, there is a high potential of making use of these images for global and regional flood management. In this framework, an original method that integrates global flood inundation modeling and microwave remote sensing is presented. It takes advantage of the combination of the time and space continuity of a global inundation model with the high spatial resolution of satellite observations. The availability of model simulations over a long time period offers opportunities for estimating flood non-exceedance probabilities in a robust way. These probabilities can be attributed to historical satellite observations. Time series of SAR-derived flood extent maps and associated non-exceedance probabilities can then be combined to generate flood hazard maps with a spatial resolution equal to that of the satellite images, which is usually higher than that of a global inundation model. In principle, this can be done for any area of interest in the world, provided that a sufficient number of relevant remote sensing images are available. As a test case we applied the method on the Severn River (UK) and the Zambezi River (Mozambique), where large archives of Envisat flood images can be exploited. The global ECMWF flood inundation model is considered for computing the statistics of extreme events. A comparison with flood hazard maps estimated with in situ measured discharge is carried out. The first results confirm the potential of the method. However, further developments on two aspects are required to improve the quality of the hazard map and to ensure the acceptability of the product by potential end user organizations. On the one hand, it is of paramount importance to
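
    As a rough sketch of the probability-assignment step (not the authors' implementation), a long model simulation can give each satellite acquisition an empirical non-exceedance probability:

    import numpy as np

    def non_exceedance_probability(simulated_levels, observed_level):
        """Fraction of simulated levels that do not exceed the level at acquisition time."""
        levels = np.sort(np.asarray(simulated_levels))
        return np.searchsorted(levels, observed_level, side="right") / levels.size

    # Stand-in for multi-decade simulated annual maxima at one cell of the global model
    annual_maxima = np.random.default_rng(1).gumbel(loc=2.0, scale=0.5, size=40)
    print(non_exceedance_probability(annual_maxima, observed_level=2.8))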

  19. Assessing Glacial Lake Outburst Flood Hazard in the Nepal Himalayas using Satellite Imagery and Hydraulic Models

    Science.gov (United States)

    Rounce, D.; McKinney, D. C.

    2015-12-01

    The last half century has witnessed considerable glacier melt that has led to the formation of large glacial lakes. These glacial lakes typically form behind terminal moraines comprising loose boulders, debris, and soil, which are susceptible to failure and can cause a glacial lake outburst flood (GLOF). These lakes also act as a heat sink that accelerates glacier melt, which in many cases is accompanied by rapid areal expansion. As these glacial lakes continue to grow, their hazard also increases due to the increase in potential flood volume and the lakes' proximity to triggering events such as avalanches and landslides. Despite the large threat these lakes may pose to downstream communities, there are few detailed studies that combine satellite imagery with hydraulic models to present a holistic understanding of the GLOF hazard. The aim of this work is to assess the GLOF hazard of glacial lakes in Nepal using a holistic approach based on a combination of satellite imagery and hydraulic models. Imja Lake will be the primary focus of the modeling efforts, but the methods will be developed in a manner that is transferable to other potentially dangerous glacial lakes in Nepal.

  20. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  1. Predicting the movement of pumice rafts in the South Pacific using GNOME for enhanced navigational warnings and coastal hazard management policies

    Science.gov (United States)

    Kelly, J.; Bender, M.; Kelly, M.; Walters, C.

    2013-12-01

    Pumice rafts formed from explosive shallow submarine eruptions in the South Pacific pose a significant hazard to local maritime transportation and global coastal communities. Local concerns include the possibility of individual pumice clasts blocking seawater intake valves of ships, damaging the hulls of smaller vessels, and inundating harbors, bringing fishing and transport to a standstill. Additionally, pumice rafts can introduce harmful invasive species to delicate coastal communities around the world, as they dramatically increase dispersal distances for otherwise benthic or relatively sedentary organisms. Two volcanoes in this region have recently formed pumice rafts: Home Reef volcano (Tonga) in 2006 and Havre Seamount (Kermadec Islands) in 2012. These raft events were used as case studies to test a trajectory prediction model since they occurred during times at which high spatial and temporal resolution satellite data were being collected and/or have been described in peer reviewed literature, both of which were necessary for providing model validation. The model was created using the General NOAA Operational Modeling Environment (GNOME), which utilizes sea surface winds and sea surface height (SSH) datasets to predict the possible trajectory a pollutant might follow on a body of water. Wind and ocean current data were acquired from the SeaWinds and Poseidon-3 sensors on board the NASA Earth Observing System (EOS) satellites QuikSCAT and Jason-2. Model outputs showed the 2012 Havre Seamount raft rapidly dispersing as it drifted in an ENE direction and the 2006 Home Reef raft drifting quickly in a NW direction towards Papua New Guinea. The 2006 Home Reef prediction model was validated by comparing it to another published model that was based on an integrated surface velocity field in addition to in situ observations. The 2012 Havre Seamount prediction model was validated by spatially and temporally correlating the GNOME trajectory output with moderate

  2. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    Science.gov (United States)

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  3. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  4. Reformatting Meteorological Data for use in the Hazard Prediction and Assessment Capability

    Science.gov (United States)

    2004-11-01

    Excerpt (fragmentary in the source): HPAC uses forecast data from mesoscale model runs; in Australia, this meteorological data is produced by the Bureau of Meteorology (BoM). BoM collects large amounts of observational data from across the country and uses Numerical Weather Prediction models. The data files supplied by BoM are stored in a binary (NetCDF) form and must be reformatted so that they can be interpreted by HPAC; Table 3 of the report lists the variables of the current NetCDF format.

  5. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km2) was affected by over 250 landslides. These were classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In the last decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
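
    For reference, a minimal sketch of the classical infinite-slope safety factor with a simple pore-pressure term, as commonly used in such distributed models (the parameter values below are purely illustrative):

    import numpy as np

    def infinite_slope_fs(c_eff, phi_eff, gamma, gamma_w, z, beta, m):
        """c_eff [kPa], phi_eff and beta [rad], unit weights [kN/m3], soil depth z [m],
        m = saturated fraction of the soil thickness (0..1)."""
        resisting = c_eff + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi_eff)
        driving = gamma * z * np.sin(beta) * np.cos(beta)
        return resisting / driving

    print(infinite_slope_fs(c_eff=5.0, phi_eff=np.radians(32), gamma=18.0,
                            gamma_w=9.81, z=1.5, beta=np.radians(35), m=0.8))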

  6. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  7. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 storm-hazard projections

    Science.gov (United States)

    Barnard, Patrick; Erikson, Li; O'Neill, Andrea; Foxgrover, Amy; Herdman, Liv

    2017-01-01

    The Coastal Storm Modeling System (CoSMoS) makes detailed predictions (meter-scale) over large geographic scales (100s of kilometers) of storm-induced coastal flooding and erosion for both current and future SLR scenarios, as well as long-term shoreline change and cliff retreat.  Resulting projections for future climate scenarios (sea-level rise and storms) provide emergency responders and coastal planners with critical storm-hazards information that can be used to increase public safety, mitigate physical damages, and more effectively manage and allocate resources within complex coastal settings. Several versions of CoSMoS have been implemented for areas of the California coast, including Southern California, Central California, and San Francisco Bay, and further versions will be incorporated as additional regions and improvements are developed.

  8. Decentralized Model Predictive Control via Dual Decomposition

    Science.gov (United States)

    Wakasa, Yuji; Arakawa, Mizue; Tanaka, Kanya; Akashi, Takuya

    This paper proposes a decentralized model predictive control method based on a dual decomposition technique. A model predictive control problem for a system with multiple subsystems is formulated as a convex optimization problem. In particular, we deal with the case where the control outputs of the subsystems have coupling constraints represented by linear equalities. A dual decomposition technique is applied to this problem in order to derive the dual problem with decoupled equality constraints. A projected subgradient method is used to solve the dual problem, which leads to a decentralized algorithm. In the algorithm, a small-scale problem is solved at each subsystem, and information exchange is performed in each group consisting of some subsystems. Also, it is shown that the computational complexity in the decentralized algorithm is reduced if the dynamics of the subsystems are all the same.
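
    A hedged sketch of the dual decomposition recursion such a scheme relies on, assuming coupling constraints of the form sum_i A_i u_i = b (the notation A_i, b and alpha_k is illustrative, not the paper's):

    L(u, \lambda) = \sum_i \left( J_i(u_i) + \lambda^\top A_i u_i \right) - \lambda^\top b,
    \qquad
    u_i^{k+1} = \arg\min_{u_i \in U_i} \; J_i(u_i) + (\lambda^k)^\top A_i u_i
    \quad \text{(solved locally at subsystem } i\text{)},
    \qquad
    \lambda^{k+1} = \lambda^k + \alpha_k \Big( \sum_i A_i u_i^{k+1} - b \Big)
    \quad \text{(subgradient step on the dual)}.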

  9. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  10. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  11. Space Weather: Measurements, Models and Predictions

    Science.gov (United States)

    2014-03-21

    Report AFRL-RV-PS-TR-2014-0041, "Space Weather: Measurements, Models and Predictions", by Patricia H. Doherty, David Webb, Stuart Huston, Thomas Kuchar, Donald Mizuno, et al. Recoverable excerpt: such events are useful for the calibration of proton detectors because they bathe the outer magnetosphere (beyond Lm ~ 6.6) in a relatively uniform flux of

  12. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2017-11-02

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. BEYOND FLOOD HAZARD MAPS: DETAILED FLOOD CHARACTERIZATION WITH REMOTE SENSING, GIS AND 2D MODELLING

    Directory of Open Access Journals (Sweden)

    J. R. Santillan

    2016-09-01

    Flooding is considered to be one of the most destructive among many natural disasters, such that understanding floods and assessing the risks associated with them are becoming more important nowadays. In the Philippines, Remote Sensing (RS) and Geographic Information Systems (GIS) are two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depth and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood duration, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as the case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.

  14. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    Science.gov (United States)

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Breast cancer risks and risk prediction models.

    Science.gov (United States)

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk to develop breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  16. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
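
    A minimal sketch of the logistic-regression half of such an approach, using hypothetical grid-cell attributes for geology, distance to the transportation network and population density (the file and column names are illustrative):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    cells = pd.read_csv("grid_cells.csv")          # hypothetical cell table
    X = pd.get_dummies(cells[["rock_type"]]).join(cells[["road_distance_km", "pop_density"]])
    y = cells["has_quarry"]                        # 1 if an active quarry falls in the cell

    model = LogisticRegression(max_iter=1000).fit(X, y)
    cells["prospectivity"] = model.predict_proba(X)[:, 1]   # spatial likelihood of development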

  17. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  18. Predicting extinction rates in stochastic epidemic models

    Science.gov (United States)

    Schwartz, Ira B.; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible-infected-susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed.
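
    A hedged sketch of the kind of stochastic SIS simulation that such extinction-rate predictions can be compared against (a basic Gillespie algorithm; the parameter values are illustrative):

    import numpy as np

    def sis_extinction_time(N=200, beta=1.2, gamma=1.0, I0=20, seed=0, t_max=1e4):
        """Simulate an SIS epidemic and return the time at which infection dies out."""
        rng = np.random.default_rng(seed)
        I, t = I0, 0.0
        while I > 0 and t < t_max:
            infect = beta * I * (N - I) / N        # S -> I rate
            recover = gamma * I                    # I -> S rate
            total = infect + recover
            t += rng.exponential(1.0 / total)
            I += 1 if rng.random() < infect / total else -1
        return t

    times = [sis_extinction_time(seed=s) for s in range(100)]
    print("mean extinction time:", np.mean(times))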

  19. Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models

    Science.gov (United States)

    Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria

    2011-08-01

    The long-term probabilistic seismic hazard of central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. For the consideration of stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis and the best of them is selected through the Akaike information criterion. Based on AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
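
    For orientation, the general stress release form underlying such models couples a stress level X(t), loaded linearly in time and reduced by event stress drops S_i, to a conditional intensity; the classical choice is exponential, while the variant described above gives the hazard in terms of X(t) a Weibull form (the notation here is generic, not the paper's exact parameterization):

    X(t) = X(0) + \rho t - \sum_{t_i < t} S_i, \qquad
    \lambda(t) = \Psi\!\big(X(t)\big), \qquad
    \Psi_{\text{classical}}(x) = e^{\mu + \nu x}, \qquad
    h_{\text{Weibull}}(x) = \frac{k}{\theta}\left(\frac{x}{\theta}\right)^{k-1}.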

  20. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Flood Hazard Mapping by Using Geographic Information System and Hydraulic Model: Mert River, Samsun, Turkey

    Directory of Open Access Journals (Sweden)

    Vahdettin Demir

    2016-01-01

    In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Centers River Analysis System (HEC-RAS). In this river basin, human life losses and a significant amount of property damage were experienced in the 2012 flood. The preparation of flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows of different return periods using a hydraulic model (HEC-RAS), and (3) preparation of flood risk maps by integrating the results of (1) and (2).

  2. Basic features of the predictive tools of early warning systems for water-related natural hazards: examples for shallow landslides

    Directory of Open Access Journals (Sweden)

    R. Greco

    2017-12-01

    To manage natural risks, an increasing effort is being put into the development of early warning systems (EWS), namely approaches that face catastrophic phenomena by timely forecasting and by spreading alarms throughout the exposed population. Research efforts aimed at the development and implementation of effective EWS should especially concern the definition and calibration of the interpretative model. This paper analyses the main features characterizing predictive models working in EWS by discussing their aims and their features in terms of model accuracy, the evolutionary stage of the phenomenon at which the prediction is carried out, and model architecture. Original classification criteria based on these features are developed throughout the paper and shown in their practical implementation through examples of flow-like landslides and earth flows, both of which are characterized by rapid evolution and are quite representative of many applications of EWS.

  3. Basic features of the predictive tools of early warning systems for water-related natural hazards: examples for shallow landslides

    Science.gov (United States)

    Greco, Roberto; Pagano, Luca

    2017-12-01

    To manage natural risks, an increasing effort is being put into the development of early warning systems (EWS), namely approaches that face catastrophic phenomena by timely forecasting and by spreading alarms throughout the exposed population. Research efforts aimed at the development and implementation of effective EWS should especially concern the definition and calibration of the interpretative model. This paper analyses the main features characterizing predictive models working in EWS by discussing their aims and their features in terms of model accuracy, the evolutionary stage of the phenomenon at which the prediction is carried out, and model architecture. Original classification criteria based on these features are developed throughout the paper and shown in their practical implementation through examples of flow-like landslides and earth flows, both of which are characterized by rapid evolution and are quite representative of many applications of EWS.

  4. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207

  5. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  6. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  7. Proteome coverage prediction with infinite Markov models

    Science.gov (United States)

    Claassen, Manfred; Aebersold, Ruedi; Buhmann, Joachim M.

    2009-01-01

    Motivation: Liquid chromatography tandem mass spectrometry (LC-MS/MS) is the predominant method to comprehensively characterize complex protein mixtures such as samples from prefractionated or complete proteomes. In order to maximize proteome coverage for the studied sample, i.e. identify as many traceable proteins as possible, LC-MS/MS experiments are typically repeated extensively and the results combined. Proteome coverage prediction is the task of estimating the number of peptide discoveries of future LC-MS/MS experiments. Proteome coverage prediction is important to enhance the design of efficient proteomics studies. To date, there does not exist any method to reliably estimate the increase of proteome coverage at an early stage. Results: We propose an extended infinite Markov model DiriSim to extrapolate the progression of proteome coverage based on a small number of already performed LC-MS/MS experiments. The method explicitly accounts for the uncertainty of peptide identifications. We tested DiriSim on a set of 37 LC-MS/MS experiments of a complete proteome sample and demonstrated that DiriSim correctly predicts the coverage progression already from a small subset of experiments. The predicted progression enabled us to specify maximal coverage for the test sample. We demonstrated that quality requirements on the final proteome map impose an upper bound on the number of useful experiment repetitions and limit the achievable proteome coverage. Contact: manfredc@inf.ethz.ch; jbuhmann@inf.ethz.ch PMID:19477982

  8. Use of agent-based modelling in emergency management under a range of flood hazards

    Directory of Open Access Journals (Sweden)

    Tagg Andrew

    2016-01-01

    The Life Safety Model (LSM) was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.

  9. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: 'Kursk' submarine study

    Directory of Open Access Journals (Sweden)

    A. Baklanov

    2003-01-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: (1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and (2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found from the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed the identification of the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, forecasts of possible consequences during phases of the high and medium potential risk levels, based on a unit hypothetical release (e.g. 1 Bq), were performed. The analysis showed that possible deposition fractions of 10^-11 Bq/m2 over the Kola Peninsula, and 10^-12 to 10^-13 Bq/m2 for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of a nuclear, chemical or biological nature.

  10. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  11. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
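    The back-propagation approach summarised above can be sketched with a generic regressor. This is an illustrative sketch only, not the authors' network: the mix-design ranges, network size and the synthetic strength relation are invented placeholders.

```python
# Illustrative sketch (not the authors' code): a small feed-forward network that
# maps mix proportions, maximum aggregate size (MAS) and slump to 28-day
# compressive strength. The training data are random placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: cement, water, fine agg., coarse agg. (kg/m^3), MAS (mm), slump (mm)
X = rng.uniform([250, 140, 600, 900, 10, 25], [500, 220, 900, 1200, 40, 200], size=(200, 6))
w_c = X[:, 1] / X[:, 0]                          # water/cement ratio, the dominant factor
y = 80 - 60 * w_c + rng.normal(0, 2, 200)        # synthetic strength (MPa), for illustration only

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, y)
print("Predicted strength (MPa):", model.predict(X[:3]).round(1))
```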

  12. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  13. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  14. Climate change in a Point-Over-Threshold model: an example on ocean-wave-storm hazard in NE Spain

    Science.gov (United States)

    Tolosana-Delgado, R.; Ortego, M. I.; Egozcue, J. J.; Sánchez-Arcilla, A.

    2009-09-01

    Climatic change is a problem of general concern. When dealing with hazardous events such as wind-storms, heavy rainfall or ocean-wave storms this concern is even more serious. Climate change might imply an increase of human and material losses, and it is worth devoting efforts to detect it. Hazard assessment of such events is often carried out with a point-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude of each event is modelled as an arbitrary random variable, which upper tail is described by a Generalized Pareto Distribution (GPD). Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. The GPD models excesses over a threshold. If X is the magnitude of an event and x0 a value of the support of X, the excess over the threshold x0 is Y = X - x0, conditioned to X > x0. Therefore, the support of Y is (a segment of) the positive real line. The GPD model has a scale and a shape parameter. The scale parameter of the distribution is β > 0. The shape parameter, ? is real-valued, and it defines three different sub-families of distributions. GPD distributions with ? 0, distributions have infinite heavy tails (ysup = +? ), and for ? = 0 we obtain the exponential distribution, which has an infinite support but a well-behaved tail. The GPD distribution function is ( ? )- 1 ? FY(y|β,?) = 1- 1+ β-y , 0 ? y logistic, ...) function of time, etc. For hazardous phenomena with a physical upper limit, the parsimonious choice is to consider a lineal change on v with time, whilst ? remains constant. Then, the climate change is assessed by the change on v(t) = ? 0 + t ? ? , with competing models: M0 : ? ? = 0 vs. M1 : ? ? ? = 0 These issues are illustrated using a set of 18 years of significant-ocean-wave-height data measured in a buoy in front of the Ebro delta. A Bayesian joint estimation of parameters is carried out. Posterior and predictive distributions are

  15. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implement the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from the RCM to the NWP model are not restricted to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or used to customize the same model implementation over different climatological areas.
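    A hedged sketch of the quadratic meta-model idea: fit a second-order surface to a forecast-error score evaluated at a few parameter settings, then search the surface instead of re-running the model. The parameter ranges and the error function below are invented placeholders, not the actual NWP setup.

```python
# Quadratic meta-model (response surface) calibration sketch with placeholder data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# three free turbulence-related parameters, sampled in normalized [0, 1] space
theta = rng.uniform(0.0, 1.0, size=(30, 3))
# placeholder "forecast error" that would come from running the model at each setting
error = ((theta - [0.3, 0.6, 0.5]) ** 2).sum(axis=1) + rng.normal(0, 0.01, 30)

mm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mm.fit(theta, error)

# cheap grid search on the meta-model instead of re-running the forecast model
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 21)] * 3), axis=-1).reshape(-1, 3)
best = grid[np.argmin(mm.predict(grid))]
print("meta-model optimum (normalized parameters):", best.round(2))
```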

  16. A transparent and data-driven global tectonic regionalisation model for seismic hazard assessment

    Science.gov (United States)

    Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice

    2018-01-01

    A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognises that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalisation, particularly within a seismic hazard assessment context, have often been based on expert judgements; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalisation process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalisation model for seismic hazard applications, together with subjective probabilities (e.g. the degree of being active or the degree of being cratonic) that indicate the degree to which a site belongs in each tectonic category.

  17. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R^2 values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R^2 values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
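    The Z-score construction can be illustrated with a short sketch (synthetic data, not the study's measurements): regress a dimension on EGA with higher-order terms and express a new measurement relative to the fitted curve.

```python
# Z-score sketch: Z = (measured - predicted) / residual SD of the regression.
import numpy as np

rng = np.random.default_rng(3)
ega = rng.uniform(12, 39, 350)                                            # weeks
dim = 0.9 + 0.16 * ega + 0.001 * ega**2 + rng.normal(0, 0.4, ega.size)    # mm, placeholder

# cubic fit in EGA, mirroring models that add squared and cubic terms for curvilinearity
coef = np.polyfit(ega, dim, deg=3)
resid_sd = np.std(dim - np.polyval(coef, ega), ddof=4)                    # residual SD

def z_score(measured_mm: float, ega_weeks: float) -> float:
    """Z score of a new measurement relative to the fitted normal model."""
    return (measured_mm - np.polyval(coef, ega_weeks)) / resid_sd

print(round(z_score(6.5, 24.0), 2))
```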

  18. Lagrangian predictability characteristics of an Ocean Model

    Science.gov (United States)

    Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia

    2014-11-01

    The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. In this regard, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to pair particle dispersion, at mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to observed data, indicating that KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology here presented can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.

  19. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    Science.gov (United States)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

    headquarters until 9 p.m.: families, school classes with and without teachers, civil protection groups, journalists. This initiative, built up in a few weeks, received very large feedback, also due to the media highlighting the presumed prediction. Although we could not rule out the possibility of a strong earthquake in central Italy (with effects in Rome), we tried to explain the meaning of short-term earthquake prediction vs. probabilistic seismic hazard assessment. Although many people remained fearful (many decided to take a day off and leave the town or stay in public parks), we contributed to reducing this feeling and therefore the social cost of this strange Roman day. Moreover, another lesson learned is that these (fortunately sporadic) circumstances, when people's attention is high, are important opportunities for science communication. We thank all the INGV colleagues who contributed to the May 11 Open Day, in particular the Press Office, the Educational and Outreach laboratory, the Graphics Laboratory and SissaMedialab. P.S. No large earthquake happened

  20. Large Scale Debris-flow Hazard Assessment : A Geotechnical Approach and Gis Modelling

    Science.gov (United States)

    Delmonaco, G.; Leoni, G.; Margottini, C.; Puglisi, C.; Spizzichino, D.

    A deterministic approach has been developed for large-scale landslide hazard analysis carried out by ENEA, the Italian Agency for New Technologies, Energy and Environment, in the framework of TEMRAP - The European Multi-Hazard Risk Assessment Project, aimed at the application of methodologies to incorporate the reduction of natural disasters. The territory of Versilia, and in particular the basin of the Vezza river (60 km2), has been chosen as the test area of the project. The Vezza river basin was affected by over 250 shallow landslides (debris/earth flows), mainly involving the metamorphic geological formations outcropping in the area, triggered by the hydro-meteorological event of 19 June 1996. Many approaches and methodologies have been proposed in the scientific literature aimed at assessing landslide hazard and risk, depending essentially on the scope of work, availability of data and scale of representation. In the last decades landslide hazard and risk analyses have been favoured by the development of GIS techniques that have permitted to generalise, synthesise and model the stability conditions at large-scale (>1:10,000) investigation. In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The deterministic analysis has been developed through the following steps: 1) elaboration of a landslide inventory map through aerial photo interpretation and direct field survey; 2) generation of a database and digital maps; 3) elaboration of a DTM and slope angle map; 4) definition of a superficial soil thickness map; 5) litho-technical soil characterisation, through implementation of a back-analysis on test slopes and laboratory test analysis; 6) inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; 7) implementation of a slope stability model (infinite slope model) and
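    The infinite slope model referenced in step 7 can be sketched as follows; this is a generic textbook formulation with placeholder parameters, not the authors' implementation.

```python
# Infinite-slope factor of safety with a water table parallel to the slope.
import math

def infinite_slope_fs(c_eff, phi_eff_deg, slope_deg, soil_depth, water_depth,
                      gamma_soil=19.0, gamma_w=9.81):
    """Factor of safety; cohesion in kPa, unit weights in kN/m^3, depths in m, angles in degrees.
    water_depth is the height of the water table above the failure surface."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = gamma_soil * soil_depth * math.cos(beta) ** 2
    pore_pressure = gamma_w * water_depth * math.cos(beta) ** 2
    shear_stress = gamma_soil * soil_depth * math.sin(beta) * math.cos(beta)
    return (c_eff + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress

# example: 1.5 m of colluvium, half-saturated, on a 35 degree slope (placeholder values)
print(round(infinite_slope_fs(c_eff=2.0, phi_eff_deg=32.0, slope_deg=35.0,
                              soil_depth=1.5, water_depth=0.75), 2))
```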

  1. MODELING OF THE BUILDING LOCAL PROTECTION (SHELTER – IN PLACE INCLUDING SORBTION OF THE HAZARDOUS CONTAMINANT ON INDOOR SURFACES

    Directory of Open Access Journals (Sweden)

    N. N. Belyayev

    2014-05-01

    Full Text Available Purpose. Chemically hazardous objects, where toxic substances are used, manufactured and stored, and also main lines on which hazardous materials are transported, are potential sources of accidental atmospheric pollution. The purpose of this work is the development of a CFD model for evaluating the efficiency of the local protection of a building against the ingress of hazardous substances by using an air curtain, taking into account sorption/desorption of the hazardous substance on indoor surfaces. Methodology. To solve the problem of the hydrodynamic interaction of the air curtain with the wind flow, and to take into account the influence of the building on this process, the model of an ideal fluid is used. To calculate the transfer of the hazardous substance in the atmosphere, an equation of convection-diffusion transport of impurities is applied. To calculate the process of indoor air pollution under leakage of contaminated outdoor air, the Karlsson & Huber model is used; this model takes into account the sorption of the hazardous substance on various indoor surfaces. For the numerical integration of the model equations, difference methods are used. Findings. In this paper we construct an efficient CFD model for evaluating the effectiveness of protecting buildings against the ingress of hazardous substances through the use of an air curtain. On the basis of the built model, a computational experiment was carried out to assess the effectiveness of this protection method under varying locations of the air curtain relative to the building. Originality. A new model was developed to compute the effectiveness of the air curtain supply in reducing the toxic chemical concentration inside the building. Practical value. The developed model can be used for the design of local protection of buildings against the ingress of hazardous substances.

  2. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    Science.gov (United States)

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure score. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence intervals, 0.858-0.910) for model 1 and 0.913 (95% confidence intervals, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ2 was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence intervals, 0.840-0.892) for model 1 and 0.850 (95% confidence intervals, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ2 was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratios, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratios, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.
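    As a minimal sketch of how a logistic model of this kind turns predictor values into a probability (the coefficients below are invented placeholders, not the published Maugeri equations; the freely available calculator should be used for real cases):

```python
# Logistic predictive model: P(outcome) = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
import math

def predicted_probability(intercept, coefficients, values):
    """Probability of the favourable outcome for a given set of predictor values."""
    score = intercept + sum(b * x for b, x in zip(coefficients, values))
    return 1.0 / (1.0 + math.exp(-score))

# hypothetical predictors: age, days from stroke to admission, motor FIM, cognitive FIM
coeffs = [-0.04, -0.02, 0.05, 0.06]          # placeholder coefficients
patient = [72.0, 20.0, 45.0, 25.0]           # placeholder patient values
print(round(predicted_probability(-1.5, coeffs, patient), 2))
```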

  3. Modeling landslide runout dynamics and hazards: crucial effects of initial conditions

    Science.gov (United States)

    Iverson, R. M.; George, D. L.

    2016-12-01

    Physically based numerical models can provide useful tools for forecasting landslide runout and associated hazards, but only if the models employ initial conditions and parameter values that faithfully represent the states of geological materials on slopes. Many models assume that a landslide begins from a heap of granular material poised on a slope and held in check by an imaginary dam. A computer instruction instantaneously removes the dam, unleashing a modeled landslide that accelerates under the influence of a large force imbalance. Thus, an unrealistically large initial acceleration influences all subsequent modeled motion. By contrast, most natural landslides are triggered by small perturbations of statically balanced effective stress states, which are commonly caused by rainfall, snowmelt, or earthquakes. Landslide motion begins with an infinitesimal force imbalance and commensurately small acceleration. However, a small initial force imbalance can evolve into a much larger imbalance if feedback causes a reduction in resisting forces. A well-documented source of such feedback involves dilatancy coupled to pore-pressure evolution, which may either increase or decrease effective Coulomb friction—contingent on initial conditions. Landslide dynamics models that account for this feedback include our D-Claw model (Proc. Roy. Soc. Lon., Ser. A, 2014, doi: 10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) and a similar model presented by Bouchut et al. (J. Fluid Mech., 2016, doi:10.1017/jfm.2016.417). We illustrate the crucial effects of initial conditions and dilatancy coupled to pore-pressure feedback by using D-Claw to perform simple test calculations and also by computing alternative behaviors of the well-documented Oso, Washington, and West Salt Creek, Colorado, landslides of 2014. We conclude that realistic initial conditions and feedbacks are essential elements in numerical models used to forecast landslide runout dynamics and hazards.

  4. First look at changes in flood hazard in the Inter-Sectoral Impact Model Intercomparison Project ensemble

    National Research Council Canada - National Science Library

    Rutger Dankers; Nigel W. Arnell; Douglas B. Clark; Pete D. Falloon; Balázs M. Fekete; Simon N. Gosling; Jens Heinke; Hyungjun Kim; Yoshimitsu Masaki; Yusuke Satoh; Tobias Stacke; Yoshihide Wada; Dominik Wisser

    2014-01-01

    .... In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale...

  5. Evaluation of MEDALUS model for desertification hazard zonation using GIS; study area: Iyzad Khast plain, Iran.

    Science.gov (United States)

    Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik

    2007-08-15

    In this study, the MEDALUS model along with GIS mapping techniques is used to determine desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters had mostly been presented for the Mediterranean region in the past, the next step included the addition of other indicators such as ground water and wind erosion. Then all of the layers, weighted by the environmental conditions present in the area, were used (following the same MEDALUS framework) before a desertification map was prepared. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally-specific parameters into the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, low land-quality management, vegetation degradation and the salinization of soil and water resources.
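    The MEDALUS-style combination of quality indices can be sketched as a chain of geometric means; the scores below are placeholders, and the groundwater and wind-erosion indices stand in for the regional additions described above.

```python
# MEDALUS-style sensitivity index: geometric mean of quality indices, each of
# which is itself the geometric mean of scored parameters (placeholder values).
import numpy as np

def quality_index(scores):
    """Geometric mean of parameter scores (each typically scaled 1.0-2.0)."""
    scores = np.asarray(scores, dtype=float)
    return float(scores.prod() ** (1.0 / scores.size))

climate = quality_index([1.2, 1.8, 1.5])          # e.g. rainfall, aridity, aspect
soil = quality_index([1.1, 1.4, 1.6, 1.3])
vegetation = quality_index([1.7, 1.5, 1.2, 1.4])
management = quality_index([1.3, 1.6])
groundwater = quality_index([1.5, 1.4])           # assumed regional addition
wind_erosion = quality_index([1.6, 1.8])          # assumed regional addition

indices = [climate, soil, vegetation, management, groundwater, wind_erosion]
esai = quality_index(indices)                     # overall sensitivity index
print(round(esai, 3))
```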

  6. Application of hazard models for patients with breast cancer in Cuba

    Science.gov (United States)

    Alfonso, Anet Garcia; de Oca, Néstor Arcia Montes

    2011-01-01

    There has been a rapid development in hazard models and survival analysis in the last decade. This article aims to assess the overall survival time of breast cancer in Cuba, as well as to determine plausible factors that may have a significant impact in the survival time. The data are obtained from the National Cancer Register of Cuba. The data set used in this study relates to 6381 patients diagnosed with breast cancer between January 2000 and December 2002. Follow-up data are available until the end of December 2007, by which time 2167 (33.9%) had died and 4214 (66.1%) were still alive. The adequacy of six parametric models is assessed by using their Akaike information criterion values. Five of the six parametric models (Exponential, Weibull, Log-logistic, Lognormal, and Generalized Gamma) are parameterized by using the accelerated failure-time metric, and the Gompertz model is parameterized by using the proportional hazard metric. The main result in terms of survival is found for the different categories of the clinical stage covariate. The survival time among patients who have been diagnosed at early stage of breast cancer is about 60% higher than the one among patients diagnosed at more advanced stage of the disease. Differences among provinces have not been found. The age is another significant factor, but there is no important difference between patient ages. PMID:21686138
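    A simplified sketch of the AIC-based comparison of parametric survival models (synthetic, uncensored times for brevity; the actual analysis must handle right-censoring of the patients still alive at the end of follow-up):

```python
# Compare candidate survival distributions by Akaike information criterion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
times = rng.weibull(1.3, 800) * 60.0          # synthetic survival times (months)

candidates = {
    "exponential": stats.expon,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,
    "lognormal": stats.lognorm,
    "generalized gamma": stats.gengamma,
}

for name, dist in candidates.items():
    params = dist.fit(times, floc=0.0)         # maximum likelihood, location fixed at zero
    loglik = dist.logpdf(times, *params).sum()
    k = len(params) - 1                        # number of free parameters (loc fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:18s} AIC = {aic:10.1f}")
```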

  7. One model to predict them all : Predicting energy behaviours with the norm activation model

    NARCIS (Netherlands)

    van der Werff, Ellen; Steg, Linda

    2015-01-01

    One of the most influential models explaining how and which (normative) factors influence environmental behaviour is the norm activation model (NAM). In support of the compatibility principle, research revealed that the NAM predicts behaviour best when all variables are measured on the same level of

  8. Earthquake catalogs for the 2017 Central and Eastern U.S. short-term seismic hazard model

    Science.gov (United States)

    Mueller, Charles S.

    2017-01-01

    The U. S. Geological Survey (USGS) makes long-term seismic hazard forecasts that are used in building codes. The hazard models usually consider only natural seismicity; non-tectonic (man-made) earthquakes are excluded because they are transitory or too small. In the past decade, however, thousands of earthquakes related to underground fluid injection have occurred in the central and eastern U.S. (CEUS), and some have caused damage. In response, the USGS is now also making short-term forecasts that account for the hazard from these induced earthquakes. Seismicity statistics are analyzed to develop recurrence models, accounting for catalog completeness. In the USGS hazard modeling methodology, earthquakes are counted on a map grid, recurrence models are applied to estimate the rates of future earthquakes in each grid cell, and these rates are combined with maximum-magnitude models and ground-motion models to compute the hazard. The USGS published a forecast for the years 2016 and 2017. Here, we document the development of the seismicity catalogs for the 2017 CEUS short-term hazard model. A uniform earthquake catalog is assembled by combining and winnowing pre-existing source catalogs. The initial, final, and supporting earthquake catalogs are made available here.
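    The grid-counting step described above can be sketched as follows; the catalog, cell size, b-value and minimum magnitude are placeholders, not the USGS inputs.

```python
# Count epicentres on a map grid and convert to annual rates; a truncated
# Gutenberg-Richter relation then distributes each cell's rate over magnitude.
import numpy as np

rng = np.random.default_rng(5)
lon = rng.uniform(-99.0, -95.0, 3000)           # synthetic catalog, CEUS-like box
lat = rng.uniform(34.0, 38.0, 3000)
years_of_catalog = 10.0

cell = 0.2                                       # grid spacing in degrees
lon_edges = np.arange(-99.0, -95.0 + cell, cell)
lat_edges = np.arange(34.0, 38.0 + cell, cell)
counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])

annual_rate = counts / years_of_catalog          # events per cell per year, M >= m_min
b_value = 1.0                                    # assumed b-value
m_min, m = 2.7, 5.0                              # assumed completeness and target magnitudes
rate_above_m = annual_rate * 10.0 ** (-b_value * (m - m_min))
print("max cell rate above M5:", rate_above_m.max().round(5))
```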

  9. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  10. Multivariate predictive model for dyslexia diagnosis.

    Science.gov (United States)

    Le Jan, Guylaine; Le Bouquin-Jeannès, Régine; Costet, Nathalie; Trolès, Nolwenn; Scalart, Pascal; Pichancourt, Dominique; Faucon, Gérard; Gombert, Jean-Emile

    2011-06-01

    Dyslexia is a specific disorder of language development that mainly affects reading. Etiological research has led to multiple hypotheses, which have given rise to various diagnostic methods and rehabilitation treatments, so that many different tests are used by practitioners to identify dyslexia symptoms. Our purpose is to determine a subset of the most efficient ones by integrating them into a multivariate predictive model. A set of screening tasks that are the most commonly used and representative of the different cognitive aspects of dyslexia was administered to 78 children from elementary school (mean age = 9 years ± 7 months) free of identified reading difficulties and to 35 dyslexic children attending a specialized consultation for dyslexia. We proposed a multi-step procedure: within each category, we first selected the most representative tasks using principal component analysis and then we implemented logistic regression models on the preselected variables. Spelling and reading tasks were considered separately. The model with the best predictive performance includes eight variables from four categories of tasks and classifies correctly 94% of the children. The sensitivity (91%) and the specificity (95%) are both high. Forty minutes are necessary to complete the test.
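    A conceptual sketch of the two-step procedure (dimension reduction followed by logistic regression), using placeholder scores rather than the study's test battery:

```python
# PCA to condense correlated task scores, then logistic regression classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_controls, n_dyslexic = 78, 35
scores = np.vstack([rng.normal(0.0, 1.0, (n_controls, 8)),
                    rng.normal(-0.8, 1.0, (n_dyslexic, 8))])   # 8 placeholder screening tasks
label = np.r_[np.zeros(n_controls), np.ones(n_dyslexic)]

clf = make_pipeline(StandardScaler(), PCA(n_components=4), LogisticRegression())
accuracy = cross_val_score(clf, scores, label, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```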

  11. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) is the most common gynecological cancer in terms of incidence and the fourth leading cause of cancer death in women. About 8151 new cases and nearly 2179 deaths have been reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimation of the recurrence risk, an estimation of lymph node metastasis or an estimate of survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in the form of classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a small fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models could find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  12. New Time-independent and Time-dependent Seismic Source Models for the Calabria Region (Italy) for the Probabilistic Seismic Hazard Maps

    Science.gov (United States)

    Akinci, Aybige; Burrato, Pierfrancesco; Falcone, Giuseppe; Mariucci, Maria Teresa; Murru, Maura; Tiberti, Mara Monica; Vannoli, Paola

    2015-04-01

    -type stochastic models (Brownian Passage Time, BPT and BPT+DCFF) together with the numerous rupture sources (full-rupture and floating partial ruptures) for possible earthquake rupture forecasting in Calabria. Ground Motion Predictive Equations are adopted from among those that properly address the seismotectonic features of the region (active shallow crustal regions, continental, subduction zones, etc.) considered for hazard assessment during the EU-SHARE project. The mentioned ingredients are handled through logic tree branches for the probabilistic seismic hazard assessment, and the final results will be presented in terms of peak ground acceleration (PGA) and spectral ordinates of response spectra with 5% damping (spectral acceleration) on rock, having 81%, 10%, 5% and 2% probability of exceedance for a time period of 50 years starting in 2015.

  13. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  14. Weight Control of Sports Training Chaos Predicting Model

    National Research Council Canada - National Science Library

    Yanping Tang; Guanghui Li

    2014-01-01

    As sports training prediction models based on the chaos local prediction method still suffer from low prediction accuracy and slow computation speed, this paper proposes a sports training chaos...

  15. Tsunami-hazard assessment based on subaquatic slope-failure susceptibility and tsunami-inundation modeling

    Science.gov (United States)

    Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael

    2015-04-01

    Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of resulting tsunami waves and iv) mapping of intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effect on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so called «intensity maps» for flooding that provide a link to the established hazard mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario, however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of

  16. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
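    A toy sketch of the general idea, not the paper's model: train one first-order Markov chain over amino-acid transitions per structural class and label an unknown segment with the most likely class. The training segments are invented placeholders.

```python
# Per-class first-order Markov chains with add-one smoothing; classification by
# maximum log-likelihood over the segment's amino-acid transitions.
import math

AA = "ACDEFGHIKLMNPQRSTVWY"

def train_markov(segments):
    """Transition probabilities P(b | a) with add-one (Laplace) smoothing."""
    counts = {(a, b): 1.0 for a in AA for b in AA}
    for seg in segments:
        for a, b in zip(seg, seg[1:]):
            counts[(a, b)] += 1.0
    totals = {a: sum(counts[(a, b)] for b in AA) for a in AA}
    return {(a, b): counts[(a, b)] / totals[a] for a in AA for b in AA}

def loglik(model, seg):
    return sum(math.log(model.get((a, b), 1e-9)) for a, b in zip(seg, seg[1:]))

train = {"helix": ["ALAKLLEEA", "MKKLAEEAL"],          # placeholder training segments
         "sheet": ["VIVTVSVI", "TVFVFIV"],
         "coil":  ["GPSGNPGS", "PGGSNSPG"]}
models = {cls: train_markov(segs) for cls, segs in train.items()}

segment = "LLAEEAKKL"
print(max(models, key=lambda cls: loglik(models[cls], segment)))
```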

  17. Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.

    Science.gov (United States)

    Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N

    2009-02-01

    Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a General Linear Regression Model Irr-Cad was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (Longitude E 98 degrees 59'-E 98 degrees 63' and Latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that Irr-Cad 'predicted' mean Field Order total soil Cd, was significantly (p channels and subsequent inter-field irrigation flows. This in turn determines Field Order in Irrigation Sequence (Field Order(IS)). Mean Field Order total soil Cd represents the mean total soil Cd (aqua regia-digested) for a given Field Order(IS). In 2004-2005, Irr-Cad was utilized to evaluate the spatial distribution of total soil Cd in a 'high-risk' area of Mae Sot District. Secondary validation on six randomly selected field groups verified that Irr-Cad predicted mean Field Order total soil Cd and was significantly (p strategic sampling of all primary fields and laboratory based determination of total soil Cd (T-Cd(P)) and the use of a weighed coefficient for Cd (Coeff(W)). The use of primary fields as the basis for Irr-Cad is also an important practical consideration due to their inherent ease of identification and vital role in the classification of fields in terms of Field Order(IS). The inclusion of mean field order soil pH (1:5(water)) to the Irr-Cad model accounted for over 79% of the variation in mean Field Order bio-available (DTPA (diethylenetriaminepentaacetic acid)-extractable) soil Cd. Rice is the

  18. Predictive modelling of boiler fouling. Final report.

    Energy Technology Data Exchange (ETDEWEB)

    Chatwani, A

    1990-12-31

    A spectral element method embodying Large Eddy Simulation based on Re-Normalization Group theory for simulating Sub Grid Scale viscosity was chosen for this work. This method is embodied in a computer code called NEKTON. NEKTON solves the unsteady, 2D or 3D, incompressible Navier Stokes equations by a spectral element method. The code was later extended to include the variable density and multiple reactive species effects at low Mach numbers, and to compute transport of large particles governed by inertia. Transport of small particles is computed by treating them as trace species. Code computations were performed for a number of test conditions typical of flow past a deep tube bank in a boiler. Results indicate qualitatively correct behavior. Predictions of deposition rates and deposit shape evolution also show correct qualitative behavior. These simulations are the first attempts to compute flow field results at realistic flow Reynolds numbers of the order of 10^4. Code validation was not done; comparison with experiment also could not be made as many phenomenological model parameters, e.g., sticking or erosion probabilities and their dependence on experimental conditions were not known. The predictions however demonstrate the capability to predict fouling from first principles. Further work is needed: use of large or massively parallel machine; code validation; parametric studies, etc.

  19. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and tw...

  20. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as other regions such as Phangna and Phuket, devastating human lives, coastal communications and economic activities. This research study has been aimed at generating a tsunami-hazard-preventing land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to shore line, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. This model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for decision makers to redevelop the region.
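    Deriving factor weights from a Saaty-style pairwise comparison matrix can be sketched with the principal-eigenvector method; the comparison values below are placeholders, not the study's judgements.

```python
# AHP-style weights: principal eigenvector of a reciprocal pairwise comparison matrix.
import numpy as np

factors = ["elevation", "proximity to shore", "population density",
           "mangrove", "forest", "stream", "road"]

# A[i, j] = relative importance of factor i over factor j on the 1-9 Saaty scale (placeholders)
A = np.ones((7, 7))
upper = {(0, 1): 2, (0, 2): 3, (0, 3): 5, (0, 4): 5, (0, 5): 4, (0, 6): 4,
         (1, 2): 2, (1, 3): 4, (1, 4): 4, (1, 5): 3, (1, 6): 3,
         (2, 3): 3, (2, 4): 3, (2, 5): 2, (2, 6): 2,
         (3, 4): 1, (3, 5): 1, (3, 6): 1,
         (4, 5): 1, (4, 6): 1, (5, 6): 1}
for (i, j), v in upper.items():
    A[i, j], A[j, i] = v, 1.0 / v

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()           # normalized priority weights
for name, w in zip(factors, weights):
    print(f"{name:22s} {w:.3f}")
```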

  1. Lithosphere Deformation Modelling of the Italian Peninsula: A Tool for Seismic Hazard Assessment

    Science.gov (United States)

    Jiménez-Munt, I.; Pagani, M.; Marcellini, A.; Sabadini, R.

    2003-04-01

    Large uncertainties of input data for seismic hazard assessment, such as the incompleteness of the historical catalogues, magnitude estimation uncertainties and the low reliability of epicentral locations, force the adoption of additional information for the characterization of seismicity. Lithosphere deformation measurement and modelling techniques were greatly extended and improved during the last decade; the detail and the reliability of the deformation models presently available make possible the development of methodologies for constraining the geodynamical evolution of active areas and particularly of their long-term seismotectonic behaviour. In the present work we describe and test a procedure to create a probabilistic hazard source model for some areas of the Italian peninsula by integrating crustal deformation modelling results with historical evidence of earthquake occurrence. This procedure uses a Bayesian approach with geophysical input to define the earthquake occurrence behaviour: historical earthquake data characterize the sample likelihood function while strain-derived occurrence rates define prior distribution parameters. Modelled strain rates are calculated by means of a finite element model, based on the thin shell scheme, of peninsular Italy and the central Mediterranean that simulates the effects of Africa-Eurasia convergence and subduction underneath the Calabrian Arc on lithospheric deformation. The computed horizontal velocities and strain rates are compared with their geodetic counterparts retrieved by a network of permanent GPS receivers in the area. In order to reproduce the clockwise rotation in southern Italy from the NNW direction of the NOTO site (south-eastern Sicily), envisaging the motion of the Africa plate, to the NNE direction of GPS sites in Calabria and Matera, the effects of the Calabrian subduction must be completed by the effects of the counter-clockwise rotation of the Adria microplate. This kinematics of Adria reproduces the NNE motion

  2. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  3. Hazard-consistent ground motions generated with a stochastic fault-rupture model

    Energy Technology Data Exchange (ETDEWEB)

    Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-12-15

    Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s^2 peak acceleration, which corresponds to a 10^{-4} to 10^{-5} annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for 1000–1100 cm/s^2 range from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to

  4. Well-conditioned model predictive control.

    Science.gov (United States)

    Dubay, Rickey; Kember, Guy; Pramujati, Bambang

    2004-01-01

    Model-based predictive control is an advanced control strategy that uses a move suppression factor or constrained optimization methods for achieving satisfactory closed-loop dynamic responses of complex systems. While these approaches are suitable for many processes, they are formulated on the selection of certain parameters that are ambiguous and also computationally demanding which makes them less suited for tight control of fast processes. In this paper, a new dynamic matrix control (DMC) algorithm is proposed that reduces inherent ill-conditioning by allowing the process prediction time step to exceed the control time step. The main feature, that stands in contrast with current DMC approaches, is that the original open-loop data are used to evaluate a "shifting factor" m in the controller matrix where m replaces the move suppression coefficient. The new control algorithm is practically demonstrated on a fast reacting process with better control being realized in comparison with DMC using move suppression. The algorithm also gives improved closed-loop responses for control simulations on a multivariable nonlinear process having variable dead-time, and on other models found in the literature. The shifting factor m is generic and can be effectively applied for any control horizon.
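    For context, a conventional unconstrained DMC law with move suppression can be sketched as follows; the plant step response and tuning values are placeholders, and the paper's shifting-factor modification is not reproduced here.

```python
# Unconstrained dynamic matrix control: build the dynamic matrix from the unit
# step response and compute the control moves that minimise the predicted error,
# regularised by a move suppression coefficient.
import numpy as np

# unit step-response coefficients of an assumed first-order plant, sampled every Ts
a = 1.0 - np.exp(-np.arange(1, 31) * 0.2)
P, M, lam = 20, 5, 0.1                          # prediction horizon, control horizon, move suppression

# dynamic matrix: A[i, j] = a[i - j] for i >= j
A = np.zeros((P, M))
for j in range(M):
    A[j:, j] = a[: P - j]

K = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T)   # unconstrained DMC gain

setpoint = 1.0
free_response = np.zeros(P)                     # predicted output with no further moves
error = setpoint - free_response                # predicted error over the horizon
delta_u = K @ error
print("first control move:", round(delta_u[0], 3))
```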

  5. Active fault characterization throughout the Caribbean and Central America for seismic hazard modeling

    Science.gov (United States)

    Styron, Richard; Pagani, Marco; Garcia, Julio

    2017-04-01

    The region encompassing Central America and the Caribbean is tectonically complex, defined by the Caribbean plate's interactions with the North American, South American and Cocos plates. Though active deformation over much of the region has received at least cursory investigation over the past 50 years, the area is chronically understudied and lacks a modern, synoptic characterization. Regardless, the level of risk in the region - as dramatically demonstrated by the 2010 Haiti earthquake - remains high because of high-vulnerability buildings and dense urban areas home to over 100 million people, who are concentrated near plate boundaries and other major structures. As part of a broader program to study seismic hazard worldwide, the Global Earthquake Model Foundation is currently working to quantify seismic hazard in the region. To this end, we are compiling a database of active faults throughout the region that will be integrated into similar models as recently done in South America. Our initial compilation contains about 180 fault traces in the region. The faults show a wide range of characteristics, reflecting the diverse styles of plate boundary and plate-margin deformation observed. Regional deformation ranges from highly localized faulting along well-defined strike-slip faults to broad zones of distributed normal or thrust faulting, and from readily-observable yet slowly-slipping structures to inferred faults with geodetically-measured slip rates >10 mm/yr but essentially no geomorphic expression. Furthermore, primary structures such as the Motagua-Polochic Fault Zone (the strike-slip plate boundary between the North American and Caribbean plates in Guatemala) display strong along-strike slip rate gradients, and many other structures are undersea for most or all of their length. A thorough assessment of seismic hazard in the region will require the integration of a range of datasets and techniques and a comprehensive characterization of epistemic uncertainties driving

  6. Predicting subsurface uranium transport: Mechanistic modeling constrained by experimental data

    Science.gov (United States)

    Ottman, Michael; Schenkeveld, Walter D. C.; Kraemer, Stephan

    2017-04-01

    Depleted uranium (DU) munitions and their widespread use throughout conflict zones around the world pose a persistent health threat to the inhabitants of those areas long after the conclusion of active combat. However, little emphasis has been put on developing a comprehensive, quantitative tool for use in remediation and hazard avoidance planning in a wide range of environments. In this context, we report experimental data on U interaction with soils and sediments. Here, we strive to improve existing risk assessment modeling paradigms by incorporating a variety of experimental data into a mechanistic U transport model for subsurface environments. Twenty different soils and sediments from a variety of environments were chosen to represent a range of geochemical parameters that are relevant to U transport. The parameters included pH, organic matter content, CaCO3, Fe content and speciation, and clay content. pH ranged from 3 to 10, organic matter content from 6 to 120 g kg⁻¹, CaCO3 from 0 to 700 g kg⁻¹, amorphous Fe content from 0.3 to 6 g kg⁻¹ and clay content from 4 to 580 g kg⁻¹. Sorption experiments were then performed, and linear isotherms were constructed. The sorption experiments show that, among separate sets of sediments and soils, both soil pH and CaCO3 concentration are inversely correlated with U sorptive affinity. The geological materials with the highest and lowest sorptive affinities for U differed in CaCO3 and organic matter concentrations, as well as clay content and pH. In a further step, we are testing whether transport behavior in saturated porous media can be predicted based on adsorption isotherms and generic geochemical parameters, and comparing these modeling predictions with the results from column experiments. The comparison of these two data sets will examine if U transport can be effectively predicted from reactive transport modeling that incorporates the generic geochemical parameters. This work will serve to show
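
    As a simple illustration of how a linear sorption isotherm feeds into a transport prediction, the sketch below fits a distribution coefficient Kd to hypothetical batch data (not the study's measurements) and converts it to a retardation factor under assumed bulk density and porosity.

      import numpy as np

      # Hypothetical batch sorption data for one soil: aqueous concentration C
      # (umol/L) versus sorbed concentration S (umol/kg).
      C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      S = np.array([12.0, 26.0, 49.0, 103.0, 198.0])

      # Linear isotherm S = Kd * C, fitted through the origin by least squares.
      Kd = float(np.sum(C * S) / np.sum(C * C))   # L/kg

      # Retardation factor for 1D advective-dispersive transport (assumed values,
      # not taken from the study).
      rho_b = 1.5    # bulk density, kg/L
      theta = 0.35   # porosity
      R = 1.0 + rho_b * Kd / theta
      print(f"Kd = {Kd:.1f} L/kg, retardation factor R = {R:.0f}")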

  7. Prediction of the thermal decomposition of organic peroxides by validated QSPR models

    Energy Technology Data Exchange (ETDEWEB)

    Prana, Vinca [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Rotureau, Patricia, E-mail: patricia.rotureau@ineris.fr [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Fayet, Guillaume [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); André, David; Hub, Serge [ARKEMA, rue Henri Moissan, BP63, Pierre Benite 69493 (France); Vicot, Patricia [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Rao, Li [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Adamo, Carlo [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Institut Universitaire de France, 103 Boulevard Saint Michel, Paris F-75005 (France)

    2014-07-15

    Highlights: • QSPR models were developed for the thermal stability of organic peroxides. • Two accurate MLR models were obtained based on quantum-chemical descriptors. • Performances were evaluated by a series of internal and external validations. • The new QSPR models satisfied all OECD principles of validation for regulatory use. - Abstract: Organic peroxides are unstable chemicals which can easily decompose and may lead to explosion. Such a process can be characterized by physico-chemical parameters such as heat and temperature of decomposition, whose determination is crucial to manage related hazards. These thermal stability properties are also required within many regulatory frameworks related to chemicals in order to assess their hazardous properties. In this work, new quantitative structure–property relationship (QSPR) models were developed to accurately predict the thermal stability of organic peroxides from their molecular structure, respecting the OECD guidelines for regulatory acceptability of QSPRs. Based on the acquisition of 38 reference experimental data points using a DSC (differential scanning calorimetry) apparatus under homogeneous experimental conditions, multi-linear models were derived for the prediction of the decomposition heat and the onset temperature using different types of molecular descriptors. Models were tested by internal and external validation tests and their applicability domains were defined and analyzed. Being rigorously validated, they presented the best performances in terms of fitting, robustness and predictive power, and the descriptors used in these models were linked to the peroxide bond, whose breaking represents the main decomposition mechanism of organic peroxides.
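
    A minimal sketch of the model-building step is shown below: a multi-linear regression on molecular descriptors with leave-one-out cross-validation as an internal validation measure. The descriptor values and response are synthetic stand-ins, not the 38 DSC measurements or the quantum-chemical descriptors used in the paper.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(38, 3))                       # stand-in descriptor matrix
      y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.2, size=38)  # stand-in property

      model = LinearRegression().fit(X, y)
      r2 = model.score(X, y)                             # goodness of fit

      # Leave-one-out predictions give the internal validation statistic Q^2.
      y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
      q2 = 1.0 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"R^2 = {r2:.2f}, leave-one-out Q^2 = {q2:.2f}")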

  8. A "mental models" approach to the communication of subsurface hydrology and hazards

    Science.gov (United States)

    Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison

    2016-05-01

    Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.

  9. Perspectives of widely scalable exposure models for multi-hazard global risk assessment

    Science.gov (United States)

    Pittore, Massimiliano; Haas, Michael; Wieland, Marc

    2017-04-01

    Less than 5% of Earth's surface is urbanized, and it currently hosts around 7.5 billion people, with these figures constantly changing as ever faster urbanization takes place. A significant percentage of this population, often in economically developing countries, is exposed to different natural hazards, which contribute to further raising the bar on the expected economic and social consequences. Global initiatives such as GAR 15 advocate for a wide-scale, possibly global, perspective on the assessment of risk arising from natural hazards, as a way to increase the risk awareness of decision-makers and stakeholders, and to better harmonize large-scale prevention and mitigation actions. Realizing, and even more importantly maintaining, a widely scalable exposure model suited for the assessment of different natural risks would allow large-scale quantitative risk and loss assessment in a more efficient and reliable way. Considering its complexity and extent, such a task is undoubtedly a challenging one, spanning multiple disciplines and operational contexts. On the other hand, with a careful design and an efficient and scalable implementation, such an endeavour would be well within reach and would contribute to significantly improving our understanding of the mechanisms lying behind what we call natural catastrophes. In this contribution we will review existing relevant applications, discuss how to tackle the most critical issues, and outline a road map for the implementation of globally scoped exposure models.

  10. Establishment and Application of Coalmine Gas Prediction Model Based on Multi-Sensor Data Fusion Technology

    Directory of Open Access Journals (Sweden)

    Wenyu Lv

    2014-04-01

    Undoubtedly an accident involving gas is one of the greater disasters that can occur in a coalmine, so being able to predict when a gas accident might occur is an essential aspect of loss prevention and the reduction of safety hazards. However, traditional methods of gas safety prediction are hindered by multi-objective and non-continuous problems. The coalmine gas prediction model based on multi-sensor data fusion technology (CGPM-MSDFT) was established through analysis of accidents involving gas, using an artificial neural network to fuse multi-sensor data; an improved algorithm was designed to train the network, and an early stopping method was used to resolve the over-fitting problem. The network test and field application results show that this model can provide a new direction for research into predicting the likelihood of a gas-related incident within a coalmine. It will have broad application prospects in coal mining.
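
    A minimal sketch of the fusion-and-early-stopping idea using scikit-learn is given below: a small feed-forward network maps several synthetic sensor channels to a gas-concentration target and stops training when a held-out validation score stops improving. The channel names, network size and data are assumptions for illustration, not the CGPM-MSDFT configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      # Synthetic stand-ins for fused sensor channels (e.g., CH4, CO, airflow, temperature).
      X = rng.normal(size=(2000, 4))
      y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(scale=0.1, size=2000)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # early_stopping=True holds out part of the training data and stops when the
      # validation score no longer improves, which addresses the over-fitting problem.
      net = MLPRegressor(hidden_layer_sizes=(16, 8), early_stopping=True,
                         validation_fraction=0.2, max_iter=2000, random_state=0)
      net.fit(X_train, y_train)
      print(f"held-out R^2: {net.score(X_test, y_test):.2f}")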

  11. Novel technologies and an overall strategy to allow hazard assessment and risk prediction of chemicals, cosmetics, and drugs with animal-free methods.

    Science.gov (United States)

    Leist, Marcel; Lidbury, Brett A; Yang, Chihae; Hayden, Patrick J; Kelm, Jens M; Ringeissen, Stephanie; Detroyer, Ann; Meunier, Jean R; Rathman, James F; Jackson, George R; Stolper, Gina; Hasiwa, Nina

    2012-01-01

    Several alternative methods to replace animal experiments have been accepted by legal bodies. An even larger number of tests are under development or already in use for non-regulatory applications or for the generation of information stored in proprietary knowledge bases. The next step for the use of the different in vitro methods is their combination into integrated testing strategies (ITS) to get closer to the overall goal of predictive "in vitro-based risk evaluation processes." We introduce here a conceptual framework as the basis for future ITS and their use for risk evaluation without animal experiments. The framework allows incorporation of both individual tests and already integrated approaches. Illustrative examples for elements to be incorporated are drawn from the session "Innovative technologies" at the 8th World Congress on Alternatives and Animal Use in the Life Sciences, held in Montreal, 2011. For instance, LUHMES cells (conditionally immortalized human neurons) were presented as an example for a 2D cell system. The novel 3D platform developed by InSphero was chosen as an example for the design and use of scaffold-free, organotypic microtissues. The identification of critical pathways of toxicity (PoT) may be facilitated by approaches exemplified by the MatTek 3D model for human epithelial tissues with engineered toxicological reporter functions. The important role of in silico methods and of modeling based on various pre-existing data is demonstrated by Altamira's comprehensive approach to predicting a molecule's potential for skin irritancy. A final example demonstrates how natural variation in human genetics may be overcome using data analytic (pattern recognition) techniques borrowed from computer science and statistics. The overall hazard and risk assessment strategy integrating these different examples has been compiled in a graphical work flow.

  12. Prediction model for penile prosthesis implantation for erectile dysfunction management.

    Science.gov (United States)

    Segal, Robert L; Camper, Stephen B; Ma, Larry; Burnett, Arthur L

    2014-10-01

    Penile prosthesis surgery is indicated based on undesirability, contraindication or ineffectiveness of non-surgical options for erectile dysfunction. This definitive treatment is often delayed after initial diagnosis. Our objective was to develop a prediction tool based on a patient's clinical history to determine likelihood of ultimately receiving a penile prosthesis. This retrospective analysis used claims data from Commercial and Medicare supplemental databases. Inclusion criteria were 18 years of age with 1 year of continuous enrollment at the first diagnosis of erectile dysfunction. Patients' demographics, co-morbidities and erectile dysfunction therapy were derived based on enrollment, medical and prescription histories. The Cox proportional hazards model with stepwise selection was used to identify and quantify (using relative risk) factors associated with a future penile prosthesis implant. Co-morbidities and therapies present prior to the index erectile dysfunction diagnosis were analyzed as fixed covariates. Approximately 1% of the dataset's population (N = 310,303 Commercial, N = 74,315 Medicare, respectively) underwent penile prosthesis implantation during the study period (3928 patients in the overall population: 2405 patients [0.78%] in the Commercial and 1523 patients [2.05%] in the Medicare population). Factors with the greatest predictive strength of penile prosthesis implantation included prostate cancer diagnosis (relative risk: 3.93, 2.29; 95% CI, 3.57-4.34, 2.03-2.6), diabetes mellitus (2.31, 1.23; 2.12-2.52, 1.1-1.37) and previous treatment with first-line therapy (1.39, 1.33; 1.28-1.5, 1.2-1.47) (all P prosthesis. Calculating the likelihood of penile prosthesis implantation based on the weight of these factors may assist clinicians with the definition of a care plan and patient counseling. The precision of the model may be limited by factors beyond medical history information that possibly influence the decision to proceed to
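
    The time-to-implant analysis described here is, at its core, a Cox proportional hazards fit with fixed baseline covariates; a minimal sketch with the lifelines package is shown below. The synthetic data, covariate names and coefficients are invented for illustration and do not come from the claims databases used in the study.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(3)
      n = 500
      prostate_ca = rng.binomial(1, 0.15, n)
      diabetes = rng.binomial(1, 0.30, n)
      first_line_tx = rng.binomial(1, 0.60, n)

      # Exponential event times whose rate increases with the risk factors,
      # with administrative censoring at 8 years of follow-up.
      rate = 0.03 * np.exp(1.3 * prostate_ca + 0.8 * diabetes + 0.3 * first_line_tx)
      t_event = rng.exponential(1.0 / rate)
      time = np.minimum(t_event, 8.0)
      implant = (t_event <= 8.0).astype(int)

      df = pd.DataFrame({"time": time, "implant": implant,
                         "prostate_ca": prostate_ca, "diabetes": diabetes,
                         "first_line_tx": first_line_tx})

      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="implant")
      print(cph.hazard_ratios_)   # relative risks analogous to those reported in the paper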

  13. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models.

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN models in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve.

  14. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2007-08-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  15. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2006-04-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  16. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    Novel foods have been the object of intense public debate in recent years. Despite efforts to communicate the outcomes of risk assessments to consumers, public confidence in the management of the potential risks associated with them has been low. Various reasons behind this have been identified, chiefly a disagreement between consumers' and experts' understanding of the benefits and risks associated with three novel foods (a potato, rice and functional food ingredients), examined here using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allow ... offered by lifelong habits. Consumers found it utterly unconvincing that, all of a sudden, they should regard their everyday foods as toxic, and therefore it might not be possible to effectively communicate the health benefits of some novel foods to consumers. Several misconceptions became apparent ...

  17. A Quasi-Poisson Approach on Modeling Accident Hazard Index for Urban Road Segments

    Directory of Open Access Journals (Sweden)

    Lu Ma

    2014-01-01

    In light of recently emphasized studies on the risk evaluation of crashes, accident counts on specific transportation facilities are commonly adopted to reflect the chance of crash occurrence. The current study introduces a more comprehensive measure, named the Accident Hazard Index (AHI) in the following context, which supplements accident counts with information on accident harmfulness. Before the statistical analysis, datasets from various sources are integrated under a GIS platform, and the corresponding procedures are presented as an illustrated example for similar analyses. Then, a quasi-Poisson regression model is suggested for the analysis; the results show that the model is appropriate for dealing with overdispersed count data, and several key explanatory variables were found to have a significant impact on the estimation of the AHI. In addition, the effect of the weights assigned to different severity levels of accidents is examined, and the selection of these weights is also discussed.
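
    A quasi-Poisson fit of this kind can be reproduced with statsmodels by keeping the Poisson mean structure and estimating the dispersion from the Pearson chi-square statistic; the sketch below uses synthetic segment-level data, not the GIS-integrated dataset of the study.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 300
      # Synthetic explanatory variables (e.g., standardized traffic volume, segment
      # length, intersection density) plus a constant term.
      X = sm.add_constant(rng.normal(size=(n, 3)))
      mu = np.exp(0.5 + 0.4 * X[:, 1] - 0.3 * X[:, 2])
      y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))   # over-dispersed counts

      # Quasi-Poisson: Poisson likelihood for the mean, dispersion from Pearson chi^2.
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
      print(fit.params)
      print("estimated dispersion:", round(float(fit.scale), 2))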

  18. FLOOD HAZARD MAP IN THE CITY OF BATNA (ALGERIA) BY HYDRAULIC MODELING APPROACH

    Directory of Open Access Journals (Sweden)

    Guellouh SAMI

    2016-06-01

    In light of the global climatic changes that appear to influence the frequency and intensity of floods, and whose damage continues to grow, understanding hydrological processes, their spatio-temporal setting and their extreme forms has become a paramount concern for local communities in terms of forecasting. The aim of this study is to map the flood hazard using a hydraulic modeling method. Using a Geographic Information System (GIS) allows a more detailed spatial analysis of the extent of the flooding risk, through the application of hydraulic modeling programs for different flood frequencies. Based on the results of this analysis, decision makers can implement a risk management strategy for the overflowing of the rivers that cross the city of Batna.

  19. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  20. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to missions such as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
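
    The look-up-table-plus-engine-model idea can be sketched in a few lines; the table values, interpolation grid and fuel-per-maneuver figure below are placeholders, not the TRMM numbers or the QuattroPro implementation.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Placeholder look-up table: maneuvers per month as a function of ballistic
      # coefficient (kg/m^2) and solar flux index F10.7.
      bc_axis = np.array([50.0, 100.0, 150.0])
      f107_axis = np.array([70.0, 150.0, 230.0])
      rate_table = np.array([[4.0, 8.0, 14.0],
                             [2.0, 4.0, 7.0],
                             [1.5, 3.0, 5.0]])
      maneuver_rate = RegularGridInterpolator((bc_axis, f107_axis), rate_table)

      # Simple engine model: fuel burned = maneuvers per month * fuel per maneuver.
      fuel_per_maneuver_kg = 0.6
      spacecraft_bc = 100.0
      monthly_f107_forecast = [120.0, 140.0, 180.0, 160.0, 150.0, 130.0]

      fuel_used = sum(fuel_per_maneuver_kg * float(maneuver_rate([[spacecraft_bc, f]])[0])
                      for f in monthly_f107_forecast)
      print(f"fuel used over {len(monthly_f107_forecast)} months: {fuel_used:.1f} kg")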

  1. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, which is used to cool multiple areas or rooms. In each time period we choose the cooling capacity for each unit and a common evaporation temperature. The goal is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost ... savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation...

  2. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  3. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  4. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  5. Biomarkers As Predicting Models Of Stroke Incidence

    Directory of Open Access Journals (Sweden)

    Abbas Ghorbani

    2017-02-01

    OBJECTIVES: Biomarkers are indicators measured by chemical or biologic tests of blood or urine that predict physiologic or disease states, or increased disease risk. Risk stratification of persons at risk of a future vascular event can identify subpopulations that would benefit most from established and emerging stroke-preventive therapies. METHODS: Biomarkers representing various components of the inflammatory cascade were considered, including (1) systemic inflammation (C-reactive protein [CRP], interleukin 6, monocyte chemotactic protein 1, tumor necrosis factor α1, tumor necrosis factor receptor 2 [TNFR2], osteoprotegerin, fibrinogen) and (2) vascular inflammation/endothelial dysfunction (intercellular adhesion molecule 1, CD40 ligand, P-selectin, lipoprotein-associated phospholipase A mass and activity, total homocysteine [tHcy], vascular endothelial growth factor [VEGF], and oxidative stress (myeloperoxidase)). RESULTS: Circulating biomarkers of inflammation and endothelial dysfunction are associated with ischemic stroke in stroke-free community-dwelling individuals, and they can be used to refine stroke prediction models with the inclusion of four biomarkers (CRP, TNFR2, tHcy, VEGF). DISCUSSION: Although the roles of biomarkers are basically diagnosing the disease and predicting the outcome, biomarkers in patients with stroke can also provide a large variety of other information about the risk of future stroke and possible stroke mechanisms for biomarker-guided treatment. Among circulating biomarkers, VEGF had the greatest individual degree of discrimination for future ischemic stroke; the relation between VEGF and ischemic stroke pathogenesis is, however, not well established. Total homocysteine and CRP are well-established markers of increased stroke risk, the former via its role in accelerated atherosclerotic disease and the latter marking systemic inflammation and plaque instability. It has been demonstrated independent

  6. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models.

    Science.gov (United States)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-11-01

    El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and resulting in hundreds of victims. Hazard assessment is commonly based on numerical propagation models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite-difference and finite-volume numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results

  7. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing... on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  8. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
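
    The tornado-diagram step can be illustrated with a toy loss function: each logic-tree choice is swung across its range while the others stay at their baseline, and the resulting output swings are ranked. The branch names, ranges and loss function below are invented for illustration and are not the UCERF3-TD branches or a real portfolio loss model.

      def portfolio_loss(params):
          # Toy stand-in for an expected annualized portfolio loss evaluated on one
          # combination of logic-tree branch values.
          return 100.0 * params["slip_model"] + 40.0 * params["mmax"] + 5.0 * params["gmpe"]

      baseline = {"slip_model": 1.0, "mmax": 1.0, "gmpe": 1.0}
      ranges = {"slip_model": (0.8, 1.3), "mmax": (0.9, 1.2), "gmpe": (0.7, 1.4)}

      # Swing = output change when one input alone moves across its range; branches
      # with small swings are candidates for fixing at a single value.
      swings = {}
      for name, (lo, hi) in ranges.items():
          p_lo, p_hi = dict(baseline), dict(baseline)
          p_lo[name], p_hi[name] = lo, hi
          swings[name] = abs(portfolio_loss(p_hi) - portfolio_loss(p_lo))

      for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
          print(f"{name:10s} swing = {swing:5.1f}")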

  9. Impact of a refined airborne LiDAR stochastic model for natural hazard applications

    Science.gov (United States)

    Glennie, C. L.; Bolkas, D.; Fotopoulos, G.

    2016-12-01

    Airborne Light Detection and Ranging (LiDAR) is often employed to derive multi-temporal Digital Elevation Models (DEMs), which are used to estimate vertical displacement resulting from natural hazards such as landslides, rockfalls and erosion. Vertical displacements are estimated by computing the difference between two DEMs separated by a specified time period and applying a threshold to remove the inherent noise. Thus, reliable information about the accuracy of DEMs is essential. The assessment of airborne LiDAR errors is typically based on (i) independent ground control points or (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach depends on the stochastic model information of the LiDAR measurements. Furthermore, it provides the user with point-by-point accuracy estimation. In this study, a refined stochastic model is obtained through variance component estimation (VCE) for a dataset in Houston, Texas. Results show that the initial stochastic information was optimistic by 35% for both horizontal coordinates and ellipsoidal heights. To assess the impact of a refined stochastic model, surface displacement simulations are evaluated. The simulations include scenarios with topographic slopes that vary from 10° to 60°, and vertical displacements of ±1 to ±5 m. Results highlight the cases where a reliable stochastic model is important. A refined stochastic model can be used in practical applications to determine appropriate noise thresholds for vertical displacement, improve quantitative analysis, and enhance relevant decision-making.

  10. Spatio-Temporal Risk Assessment Process Modeling for Urban Hazard Events in Sensor Web Environment

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-11-01

    Immediate risk assessment and analysis are crucial in managing urban hazard events (UHEs). However, it is a challenge to develop an immediate risk assessment process (RAP) that can integrate distributed sensors and data to determine the uncertain model parameters of facilities, environments, and populations. To solve this problem, this paper proposes a RAP modeling method within a unified spatio-temporal framework and forms a 10-tuple process information description structure based on a Meta-Object Facility (MOF). A RAP is designed as an abstract RAP chain that collects urban information resources and performs immediate risk assessments. In addition, we propose a prototype system known as Risk Assessment Process Management (RAPM) to achieve the functions of RAP modeling, management, execution and visualization. An urban gas leakage event is simulated as an example in which individual risk and social risk are used to illustrate the applicability of the RAP modeling method based on the 10-tuple metadata framework. The experimental results show that the proposed RAP immediately assesses risk by the aggregation of urban sensors, data, and model resources. Moreover, an extension mechanism is introduced in the spatio-temporal RAP modeling method to assess risk and to provide decision-making support for different UHEs.

  11. Estimation in the cox proportional hazards model with left-truncated and interval-censored data.

    Science.gov (United States)

    Pan, Wei; Chappell, Rick

    2002-03-01

    We show that the nonparametric maximum likelihood estimate (NPMLE) of the regression coefficient from the joint likelihood (of the regression coefficient and the baseline survival) works well for the Cox proportional hazards model with left-truncated and interval-censored data, but the NPMLE may underestimate the baseline survival. Two alternatives are also considered: first, the marginal likelihood approach by extending Satten (1996, Biometrika 83, 355-370) to truncated data, where the baseline distribution is eliminated as a nuisance parameter; and second, the monotone maximum likelihood estimate that maximizes the joint likelihood by assuming that the baseline distribution has a nondecreasing hazard function, which was originally proposed to overcome the underestimation of the survival from the NPMLE for left-truncated data without covariates (Tsai, 1988, Biometrika 75, 319-324). The bootstrap is proposed to draw inference. Simulations were conducted to assess their performance. The methods are applied to the Massachusetts Health Care Panel Study data set to compare the probabilities of losing functional independence for male and female seniors.

  12. A Gis Model Application Supporting The Analysis of The Seismic Hazard For The Urban Area of Catania (italy)

    Science.gov (United States)

    Grasso, S.; Maugeri, M.

    rigorous complex methods of analysis or qualitative procedures. A semi-quantitative procedure based on the definition of a geotechnical hazard index has been applied for the zonation of the seismic geotechnical hazard of the city of Catania. In particular, this procedure has been applied to define the influence of the geotechnical properties of soil in a central area of the city of Catania, where some historical buildings of great importance are sited. An investigation was also performed based on the inspection of more than one hundred historical ecclesiastical buildings of great importance located in the city. Then, in order to identify the amplification effects due to the site conditions, a geotechnical survey form was prepared to allow a semi-quantitative evaluation of the seismic geotechnical hazard for all these historical buildings. In addition, to evaluate the foundation soil time-history response, a 1-D dynamic soil model was employed for all these buildings, considering the non-linearity of soil behaviour. Using a GIS, a map of the seismic geotechnical hazard, a map of the liquefaction hazard and a preliminary map of the seismic hazard for the city of Catania have been obtained. From the analysis of the obtained results it may be noticed that high hazard zones are mainly clayey sites.

  13. Analysis of two-phase sampling data with semiparametric additive hazards models.

    Science.gov (United States)

    Sun, Yanqing; Qian, Xiyuan; Shou, Qiong; Gilbert, Peter B

    2017-07-01

    Under the case-cohort design introduced by Prentice (Biometrika 73:1-11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing the auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
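
    Aalen's additive hazards model, a readily available relative of the semiparametric additive model discussed here, can be fitted with lifelines as sketched below on a bundled example dataset; this baseline does not implement the case-cohort augmented inverse probability weighted estimator proposed in the paper.

      from lifelines import AalenAdditiveFitter
      from lifelines.datasets import load_rossi

      df = load_rossi()   # example recidivism data shipped with lifelines
      aaf = AalenAdditiveFitter(coef_penalizer=0.1)
      aaf.fit(df, duration_col="week", event_col="arrest")

      # Time-varying cumulative regression coefficients: the slope over time of each
      # column reflects a covariate's (possibly time-varying) additive effect.
      print(aaf.cumulative_hazards_.head())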

  14. Seismic Hazard of the Uttarakhand Himalaya, India, from Deterministic Modeling of Possible Rupture Planes in the Area

    Directory of Open Access Journals (Sweden)

    Anand Joshi

    2013-01-01

    This paper presents the use of a semi-empirical method for seismic hazard zonation. The seismotectonically important region of the Uttarakhand Himalaya has been considered in this work. Ruptures along the lineaments in the area, identified from the tectonic map, are modeled deterministically using the semi-empirical approach given by Midorikawa (1993). This approach makes use of an attenuation relation for peak ground acceleration to simulate strong ground motion at any site. Strong-motion data collected over a span of three years in this region have been used to develop an attenuation relation for peak ground acceleration with limited magnitude and distance applicability. The developed attenuation relation is used in the semi-empirical method to predict peak ground acceleration from the modeled rupture planes in the area. A set of peak ground acceleration values from possible ruptures in the area at the point of investigation is further used to compute the probability of exceedance of peak ground accelerations of 100 and 200 gal. The prepared map shows that regions like Tehri, Chamoli, Almora, Srinagar, Devprayag, Bageshwar, and Pauri fall in a zone of 10% probability of exceedance of a peak ground acceleration of 200 gal.
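
    The final step, turning per-rupture peak accelerations into exceedance probabilities, can be sketched generically as below; the scenario medians, annual rates, lognormal scatter and 50-year horizon are assumptions for illustration rather than the Midorikawa (1993) formulation or the values used for Uttarakhand.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical rupture scenarios: median PGA at the site (gal) from an
      # attenuation relation, with an assumed annual occurrence rate per rupture.
      median_pga = np.array([80.0, 150.0, 220.0, 60.0, 300.0])
      annual_rate = np.array([1e-3, 5e-4, 2e-4, 2e-3, 1e-4])
      sigma_ln = 0.6   # assumed lognormal scatter of the attenuation relation

      for target in (100.0, 200.0):
          # P(PGA > target | rupture) under lognormal ground-motion scatter
          p_exceed = 1.0 - norm.cdf(np.log(target), loc=np.log(median_pga), scale=sigma_ln)
          total_rate = float(np.sum(annual_rate * p_exceed))   # annual exceedance rate
          p_50yr = 1.0 - np.exp(-total_rate * 50.0)            # Poisson occurrence assumption
          print(f"PGA > {target:.0f} gal: annual rate {total_rate:.2e}, 50-yr probability {p_50yr:.3f}")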

  15. Multi-axis model predictive contouring control

    Science.gov (United States)

    Lam, Denise; Manzie, Chris; Good, Malcolm C.

    2013-08-01

    Contouring systems involve competing control objectives of maximising accuracy while minimising traversal time. A previously developed model predictive contouring controller for biaxial systems is extended to multi-axis systems subject to joint acceleration and jerk constraints. This requires consideration of manipulator forward kinematics and both position and orientation of the end effector. The control design is based on minimising a cost function which reflects the trade-off between the control objectives. A new architecture is proposed where the joint position controllers operate at a sample rate comparable to industrial machines, while the contouring control scheme operates at a slower rate. The proposed approach is applied to a simulation model of an industrial profile cutting machine. A number of implementations are presented requiring varying degrees of modification to the existing machine hardware and sensing capability. Results demonstrate the effect of the cost function weights on contouring accuracy and traversal time, as well as the trade-off between achieving the best contouring performance and minimising modification of the existing system.

  16. Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.

    Science.gov (United States)

    Malet, Jean-Philippe; Remaître, Alexandre

    2015-04-01

    Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The work uses the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850, to develop and validate the methodology. First, a susceptibility assessment is performed to identify the debris-flow-prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, with or without the integration of detailed event inventories. Based on a 5 m DEM and its derivatives, and on information on slope lithology, engineering soils and land cover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated against the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient for the identification of landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance
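
    The first step, a logistic-regression susceptibility model for source areas, can be sketched as below; the predictors, coefficients and grid cells are synthetic stand-ins rather than the 5 m DEM derivatives and event inventory of the Barcelonnette Basin.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      n = 5000
      # Synthetic per-cell predictors: slope (degrees), curvature, scree-slope flag.
      X = np.column_stack([rng.uniform(0.0, 45.0, n),
                           rng.normal(0.0, 1.0, n),
                           rng.integers(0, 2, n)])
      logit = -6.0 + 0.15 * X[:, 0] + 0.5 * X[:, 2]
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # synthetic source-area inventory

      model = LogisticRegression(max_iter=1000).fit(X, y)
      susceptibility = model.predict_proba(X)[:, 1]       # spatial probability of source areas
      print("AUC against the synthetic inventory:", round(roc_auc_score(y, susceptibility), 2))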

  17. The Hazard Analysis and Critical Control Points (HACCP) generic model for the production of Thai fermented pork sausage (Nham).

    Science.gov (United States)

    Paukatong, K V; Kunawasen, S

    2001-01-01

    Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards (the pathogens previously found in Nham), sodium nitrite and metal were identified as the chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points. These steps are the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step. The critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm. This level is high enough to control Clostridium botulinum but does not cause a chemical hazard to the consumer. The physical hazard from metal clips could be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham could be reduced in the fermentation process. The critical limit of the pH of Nham was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.

  18. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    Science.gov (United States)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggering causes of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data on past events and causal factors, and because of the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.

  19. Risk Evaluation of Debris Flow Hazard Based on Asymmetric Connection Cloud Model

    Directory of Open Access Journals (Sweden)

    Xinyu Xu

    2017-01-01

    Risk assessment of debris flow is a complex problem involving various uncertainty factors. Herein, a novel asymmetric cloud model coupled with connection numbers is described to take into account the fuzziness and conversion situation of classification boundaries and the interval nature of evaluation indicators for risk assessment of debris-flow hazard. In the model, according to the classification standard, the interval lengths of each indicator were first specified to determine the digital characteristics of the connection cloud at different levels. Then the asymmetric connection clouds in finite intervals were simulated to analyze the certainty degree of each measured indicator with respect to each evaluation standard. Next, the integrated certainty degree for each grade was calculated with the corresponding indicator weights, and the risk grade of the debris flow was determined by the maximum integrated certainty degree. Finally, a case study and a comparison with other methods were conducted to confirm the reliability and validity of the proposed model. The results show that this model overcomes the defects of the conventional cloud model and also converts the infinite interval of the indicator distribution into a finite interval, which makes the evaluation result more reasonable.

  20. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model is to be applied. The suitability of the proposed prediction-error method for predictive control is demonstrated for dual composition control of a simulated binary distillation column.

  1. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply a 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  2. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zucca, J J; Walter, W R; Rodgers, A J; Richards, P; Pasyanos, M E; Myers, S C; Lay, T; Harris, D; Antoun, T

    2008-11-19

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags

  3. Validation of a 30 m resolution flood hazard model of the conterminous United States

    Science.gov (United States)

    Wing, Oliver E. J.; Bates, Paul D.; Sampson, Christopher C.; Smith, Andrew M.; Johnson, Kris A.; Erickson, Tyler A.

    2017-09-01

    This paper reports the development of a ˜30 m resolution two-dimensional hydrodynamic model of the conterminous U.S. using only publicly available data. The model employs a highly efficient numerical solution of the local inertial form of the shallow water equations which simulates fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. Importantly, we use the U.S. Geological Survey (USGS) National Elevation Dataset to determine topography; the U.S. Army Corps of Engineers National Levee Dataset to explicitly represent known flood defenses; and global regionalized flood frequency analysis to characterize return period flows and rainfalls. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area (SFHA) maps and detailed local hydraulic models developed by the USGS. Where the FEMA SFHAs are based on high-quality local models, the continental-scale model attains a hit rate of 86%. This correspondence improves in temperate areas and for basins above 400 km2. Against the higher quality USGS data, the average hit rate reaches 92% for the 1 in 100 year flood, and 90% for all flood return periods. Given typical hydraulic modeling uncertainties in the FEMA maps and USGS model outputs (e.g., errors in estimating return period flows), it is probable that the continental-scale model can replicate both to within error. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs with dramatically lower cost and greater coverage than approaches based on a patchwork of local studies.
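    A simple way to reproduce the headline validation metric (the hit rate of the continental model against a benchmark flood extent) is a cell-by-cell comparison of boolean flood masks. The small arrays below are hypothetical stand-ins for the rasterized FEMA/USGS benchmark and the model output:

```python
import numpy as np

# Hypothetical boolean flood-extent rasters (True = wet), rasterized to a common grid.
benchmark = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 0]], dtype=bool)   # e.g. FEMA SFHA or USGS local model
model     = np.array([[1, 1, 1, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=bool)   # continental-scale model

# Hit rate: fraction of benchmark wet cells that the model also marks wet.
hit_rate = np.logical_and(model, benchmark).sum() / benchmark.sum()

# A stricter skill measure, the critical success index, also penalizes false alarms.
csi = np.logical_and(model, benchmark).sum() / np.logical_or(model, benchmark).sum()

print(f"hit rate = {hit_rate:.2f}, CSI = {csi:.2f}")
```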

  4. Checking model-data weather hazard occurrence fit in the context of climate change

    OpenAIRE

    Tolosana Delgado, Raimon; Ortego Martínez, María Isabel; Egozcue Rubí, Juan José; Sánchez-Arcilla Conejo, Agustín

    2011-01-01

    In climate change impact studies it is common to run a given response model (from ecosystem changes to wave-storm or landslide occurrence) nested into one of the available long-term Global or Regional Circulation Models (GCM, RCM) reproducing the climate for the 20th century or predicting it for the 21st. In this way, it is expected to capture the average behaviour of the studied system under a changing climate forcing: in other words, with such response forecasts, one does not actual...

  5. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves throughout the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  6. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were compared with the field-verified landslide locations. Among the three cases of applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, the case of Selangor based on the logistic regression coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross
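    A minimal sketch of the core workflow (fit a susceptibility model on one area, then apply its coefficients to another area as a cross-validation), using scikit-learn and entirely hypothetical factor tables; the factor names follow the abstract but the data and the fitted relationship are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_area(n=2000):
    """Hypothetical per-cell causative factors and landslide labels for one area."""
    X = np.column_stack([
        rng.uniform(0, 45, n),      # slope (degrees)
        rng.uniform(0, 360, n),     # aspect
        rng.uniform(-1, 1, n),      # curvature
        rng.uniform(0, 500, n),     # distance from drainage (m)
        rng.uniform(0, 1, n),       # NDVI
    ])
    # Invented relationship: steeper, less vegetated cells are more landslide-prone.
    logit = 0.08 * X[:, 0] - 2.0 * X[:, 4] - 1.5
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

X_a, y_a = make_area()   # stand-in for one study area (e.g. "Selangor")
X_b, y_b = make_area()   # stand-in for another study area (e.g. "Penang")

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)

# Apply the coefficients fitted on area A to its own cells and to area B (cross-validation).
print("same-area AUC :", round(roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]), 2))
print("cross-area AUC:", round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 2))
```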

  7. Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera

    CERN Document Server

    Bevilacqua, Andrea

    2016-01-01

    This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world’s highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, prevalently explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.

  8. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) penalties, for variable selection in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the real diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
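    The harmonic penalty itself is specific to the paper, but the surrounding workflow (penalized variable selection in a Cox model on high-dimensional expression data) can be sketched with an ordinary L1 penalty as a stand-in, assuming the lifelines package is available. The data frame here is synthetic, not one of the cited datasets:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, p = 200, 20   # hypothetical: 200 patients, 20 gene-expression covariates

X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"gene_{j}" for j in range(p)])
# Invented ground truth: only the first two genes affect the hazard.
risk = 0.8 * X["gene_0"] - 0.6 * X["gene_1"]
time = rng.exponential(scale=np.exp(-risk))
event = rng.random(n) < 0.8            # roughly 20% right-censoring
df = X.assign(duration=time, event=event.astype(int))

# L1-penalized Cox fit as a simple stand-in for the harmonic (nonconvex Lq) penalty.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="duration", event_col="event")

# Variable selection: keep covariates whose penalized coefficients are not shrunk to ~0.
selected = cph.params_[cph.params_.abs() > 1e-3]
print(selected)
```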

  9. Estimation of direct effects for survival data by using the Aalen additive hazards model

    DEFF Research Database (Denmark)

    Martinussen, T.; Vansteelandt, S.; Gerster, M.

    2011-01-01

    We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned. The first stage involves Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e. a modification of the observed counting process based on the first-stage estimates). We give the large sample properties of the estimator proposed and investigate its small sample properties by Monte Carlo simulation. A real data example is provided for illustration.

  10. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    Science.gov (United States)

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
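    For orientation, the widely used single-scenario approximation of the conditional spectrum (which the paper generalizes to multiple causal earthquakes and GMPMs) reduces to two closed-form expressions per period: conditional mean = unconditional mean + correlation × sigma × epsilon at the conditioning period, and conditional sigma = sigma × sqrt(1 − correlation²). The GMPM means, sigmas and correlation values below are placeholders, not values from any published model:

```python
import numpy as np

# Periods at which the target spectrum is built (s), with T* = 1.0 s the conditioning period.
periods  = np.array([0.2, 0.5, 1.0, 2.0])
mu_lnSa  = np.array([-0.6, -0.9, -1.4, -2.0])   # placeholder GMPM mean ln(Sa) per period
sigma    = np.array([0.60, 0.65, 0.68, 0.72])   # placeholder GMPM standard deviations
rho      = np.array([0.55, 0.80, 1.00, 0.75])   # placeholder correlations with Sa(T*)

i_star  = 2                       # index of the conditioning period T* = 1.0 s
target  = -0.7                    # ln(Sa(T*)) at the hazard level of interest (placeholder)
epsilon = (target - mu_lnSa[i_star]) / sigma[i_star]

# Single-scenario conditional spectrum (approximate CS; the exact CS of the paper weights
# these quantities over multiple causal earthquakes and GMPMs from deaggregation).
cs_mean  = mu_lnSa + rho * sigma * epsilon
cs_sigma = sigma * np.sqrt(1.0 - rho**2)

for T, m, s in zip(periods, cs_mean, cs_sigma):
    print(f"T = {T:.1f} s: conditional mean ln(Sa) = {m:+.2f}, conditional sigma = {s:.2f}")
```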

  11. Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation

    Science.gov (United States)

    Borga, M.; Creutin, J. D.

    Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two

  12. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  13. First approaches towards modelling glacial hazards in the Mount Cook region of New Zealand's Southern Alps

    Science.gov (United States)

    Allen, S. K.; Schneider, D.; Owens, I. F.

    2009-03-01

    Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote sensing based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well vegetated, low gradient outlet areas suggest catastrophic dam failure and flooding is unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modeling with RAMMS was introduced for local scale analyses of rock avalanche events, and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach road and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located in close proximity to initiation zones of steep ice. Further local scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  15. IMPLICATIONS FOR ASYMMETRY, NONPROPORTIONALITY, AND HETEROGENEITY IN BRAND SWITCHING FROM PIECE-WISE EXPONENTIAL MIXTURE HAZARD MODELS

    NARCIS (Netherlands)

    WEDEL, M; KAMAKURA, WA; DESARBO, WS; TERHOFSTEDE, F

    1995-01-01

    The authors develop a class of mixtures of piece-wise exponential hazard models for the analysis of brand switching behavior. The models enable the effects of marketing variables to change nonproportionally over time and can, simultaneously, be used to identify segments among which switching and
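    A piece-wise exponential hazard, the building block these mixture models extend, is simply a step-function hazard h(t); survival (no-switch) probabilities then follow from its cumulative integral. A minimal numpy sketch with hypothetical breakpoints and rates:

```python
import numpy as np

# Hypothetical piece-wise constant switching hazard: rate lambdas[i] on [breaks[i], breaks[i+1]).
breaks  = np.array([0.0, 4.0, 12.0, np.inf])   # weeks since last purchase
lambdas = np.array([0.20, 0.08, 0.03])         # switching hazard per week in each piece

def cumulative_hazard(t):
    """H(t) = integral of the step hazard from 0 to t, vectorized over t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    lo = breaks[:-1]
    hi = np.minimum(breaks[1:], t[:, None])
    exposure = np.clip(hi - lo, 0.0, None)      # time spent in each piece before t
    return exposure @ lambdas

t = np.array([2.0, 6.0, 20.0])
H = cumulative_hazard(t)
S = np.exp(-H)                                  # probability of not having switched by t
print("H(t) =", H.round(3), " S(t) =", S.round(3))
```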

  16. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Because many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
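    For readers unfamiliar with grey models, the classical GM(1,1) (which NGM(1,1,k,c) generalizes with extra terms for nonhomogeneous index trends) fits a first-order whitenization equation to the accumulated series. The settlement values below are hypothetical, and this is the plain GM(1,1), not the paper's extended model:

```python
import numpy as np

# Hypothetical settlement observations (mm) at equal time steps.
x0 = np.array([5.2, 6.8, 8.1, 9.9, 11.6, 13.8])

# Classical GM(1,1): accumulate, then fit dx1/dt + a*x1 = b by least squares on the mean sequence.
x1 = np.cumsum(x0)
z1 = 0.5 * (x1[1:] + x1[:-1])                     # background (mean) sequence
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development coefficient a, grey input b

def predict(k):
    """Restored (differenced) prediction of the k-th value; k >= len(x0) extrapolates."""
    x1_hat = lambda j: (x0[0] - b / a) * np.exp(-a * j) + b / a
    return x0[0] if k == 0 else x1_hat(k) - x1_hat(k - 1)

fitted = [predict(k) for k in range(len(x0))]
forecast = [predict(k) for k in range(len(x0), len(x0) + 3)]
print("fitted  :", np.round(fitted, 2))
print("forecast:", np.round(forecast, 2))
```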

  17. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    Science.gov (United States)

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
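    A multilinear backbone of the kind described (elastic branch, hardening to an ultimate/capping point, then negative post-capping stiffness down to a residual plateau) can be encoded as a simple piecewise-linear force-displacement function. The control points below are illustrative, not the HAZUS S1L parameters proposed in the paper:

```python
import numpy as np

# Illustrative control points of a multilinear capacity (backbone) curve:
# (spectral displacement in cm, spectral acceleration in g).
disp = np.array([0.0, 1.0, 4.0, 10.0, 20.0])    # yield at 1.0 cm, capping point at 4.0 cm
acc  = np.array([0.0, 0.10, 0.16, 0.06, 0.06])  # negative stiffness after the capping point

def backbone(sd):
    """Monotonic pushover strength at displacement sd (piecewise-linear interpolation)."""
    return np.interp(sd, disp, acc)

for sd in [0.5, 2.0, 4.0, 7.0, 15.0]:
    print(f"Sd = {sd:5.1f} cm -> Sa = {backbone(sd):.3f} g")
```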

  18. An investigation on the modelling of kinetics of thermal decomposition of hazardous mercury wastes.

    Science.gov (United States)

    Busto, Yailen; M G Tack, Filip; Peralta, Luis M; Cabrera, Xiomara; Arteaga-Pérez, Luis E

    2013-09-15

    The kinetics of mercury removal from solid wastes generated by chlor-alkali plants were studied. The reaction order and model-free method with an isoconversional approach were used to estimate the kinetic parameters and reaction mechanism that apply to the thermal decomposition of hazardous mercury wastes. As a first approach to the understanding of thermal decomposition for this type of systems (poly-disperse and multi-component), a novel scheme of six reactions was proposed to represent the behaviour of mercury compounds in the solid matrix during the treatment. An integration-optimization algorithm was used in the screening of nine mechanistic models to develop kinetic expressions that best describe the process. The kinetic parameters were calculated by fitting each of these models to the experimental data. It was demonstrated that the D₁-diffusion mechanism appeared to govern the process at 250°C and high residence times, whereas at 450°C a combination of the diffusion mechanism (D₁) and the third order reaction mechanism (F3) fitted the kinetics of the conversions. The developed models can be applied in engineering calculations to dimension the installations and determine the optimal conditions to treat a mercury containing sludge. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    2015-01-01

    Full Text Available Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for the evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear and the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered as a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices. This indicates that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.

  20. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss.

    Science.gov (United States)

    Sun, Pengfei; Qin, Jun; Campbell, Kathleen

    2015-01-01

    Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for the evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear and the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered as a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices. This indicates that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.

  1. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in the predictions as the lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and improved initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
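    The skill threshold quoted above is an anomaly correlation coefficient (ACC) of 0.5 between predicted and observed DMI anomalies; a minimal computation on hypothetical hindcast series looks like this:

```python
import numpy as np

# Hypothetical observed and predicted Dipole Mode Index (DMI) series for one lead time.
obs  = np.array([0.3, -0.1, 0.6, 0.2, -0.4, 0.8, -0.2, 0.1, 0.5, -0.3])
pred = np.array([0.2,  0.0, 0.4, 0.3, -0.2, 0.5,  0.1, 0.0, 0.6, -0.1])

# Anomalies with respect to each series' own climatology (here, the sample mean).
obs_a, pred_a = obs - obs.mean(), pred - pred.mean()

# Anomaly correlation coefficient; skill is conventionally called "useful" when ACC > 0.5.
acc = np.sum(obs_a * pred_a) / np.sqrt(np.sum(obs_a**2) * np.sum(pred_a**2))
print(f"ACC = {acc:.2f} ->", "useful skill" if acc > 0.5 else "limited skill")
```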

  2. Prediction of the thermal decomposition of organic peroxides by validated QSPR models.

    Science.gov (United States)

    Prana, Vinca; Rotureau, Patricia; Fayet, Guillaume; André, David; Hub, Serge; Vicot, Patricia; Rao, Li; Adamo, Carlo

    2014-07-15

    Organic peroxides are unstable chemicals which can easily decompose and may lead to explosion. Such a process can be characterized by physico-chemical parameters such as the heat and temperature of decomposition, whose determination is crucial to manage the related hazards. These thermal stability properties are also required within many regulatory frameworks related to chemicals in order to assess their hazardous properties. In this work, new quantitative structure-property relationship (QSPR) models were developed to accurately predict the thermal stability of organic peroxides from their molecular structure, respecting the OECD guidelines for regulatory acceptability of QSPRs. Based on the acquisition of 38 reference experimental data points using a DSC (differential scanning calorimetry) apparatus under homogeneous experimental conditions, multi-linear models were derived for the prediction of the decomposition heat and the onset temperature using different types of molecular descriptors. The models were tested by internal and external validation tests and their applicability domains were defined and analyzed. Being rigorously validated, they presented the best performances in terms of fitting, robustness and predictive power, and the descriptors used in these models were linked to the peroxide bond, whose breaking represents the main decomposition mechanism of organic peroxides. Copyright © 2014 Elsevier B.V. All rights reserved.
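    The QSPR models described are multi-linear regressions of a thermal-stability property on molecular descriptors. A minimal sketch of that workflow with invented descriptor values (not the paper's descriptors or DSC data) and a simple external-validation split:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 38                                   # same order as the paper's reference set; data invented
X = rng.normal(size=(n, 3))              # hypothetical molecular descriptors (e.g. O-O bond related)
heat = -520 + 80 * X[:, 0] - 45 * X[:, 1] + rng.normal(0, 20, n)   # invented decomposition heat (J/g)

# Split into a training set and an external validation set, then fit the multi-linear model.
X_tr, X_val, y_tr, y_val = train_test_split(X, heat, test_size=0.25, random_state=0)
qspr = LinearRegression().fit(X_tr, y_tr)

print("R2 (fit)     :", round(r2_score(y_tr, qspr.predict(X_tr)), 2))
print("R2 (external):", round(r2_score(y_val, qspr.predict(X_val)), 2))
```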

  3. Three-dimensional displays for natural hazards analysis, using classified Landsat Thematic Mapper digital data and large-scale digital elevation models

    Science.gov (United States)

    Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.

    1991-01-01

    Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.

  4. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity for each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in about five iterations or fewer. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.

  5. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.

  6. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  7. Development of a Probabilistic Tornado Wind Hazard Model for the Continental United States Volume I: Main Report

    Energy Technology Data Exchange (ETDEWEB)

    Boissonnade, A; Hossain, Q; Kimball, J

    2000-07-20

    Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526 are different than those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.

  8. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June–September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985–2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
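    The benchmark mentioned above is a simple linear prediction of seasonal Kiremt rainfall from a predicted Niño3.4 SST index; with hypothetical hindcast values it amounts to a one-predictor regression evaluated in leave-one-out mode:

```python
import numpy as np

# Hypothetical June-September (Kiremt) rainfall anomalies and predicted Nino3.4 indices, 1985-2005.
nino34   = np.array([-0.4, 1.2, 0.8, -1.0, -0.2, 0.3, 0.9, 1.5, 0.1, -0.6, -0.1,
                     -0.3, 1.8, -1.2, -0.9, -0.5, 0.0, 0.7, 0.4, 0.2, 0.6])
rainfall = np.array([ 0.5, -0.9, -0.4, 0.8, 0.1, -0.2, -0.6, -1.1, 0.0, 0.6, 0.2,
                      0.3, -1.4, 0.9, 0.7, 0.4, 0.1, -0.5, -0.2, -0.1, -0.4])

# Leave-one-out hindcast with a one-predictor linear model (the "simple benchmark").
hindcast = np.empty_like(rainfall)
for i in range(len(rainfall)):
    mask = np.arange(len(rainfall)) != i
    slope, intercept = np.polyfit(nino34[mask], rainfall[mask], 1)
    hindcast[i] = slope * nino34[i] + intercept

skill = np.corrcoef(rainfall, hindcast)[0, 1]
print(f"leave-one-out correlation skill of the Nino3.4 benchmark: {skill:.2f}")
```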

  9. A Simple Test for the Absence of Covariate Dependence in Hazard Regression Models

    OpenAIRE

    Bhattacharjee, Arnab

    2004-01-01

    This paper extends commonly used tests for equality of hazard rates in a two-sample or k-sample setup to a situation where the covariate under study is continuous. In other words, we test the hypothesis that the conditional hazard rate is the same for all covariate values, against the omnibus alternative as well as more specific alternatives, when the covariate is continuous. The tests developed are particularly useful for detecting trend in the underlying conditional hazard rates or chang...

  10. GIS and RS-based modelling of potential natural hazard areas in Pehchevo municipality, Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Milevski Ivica

    2013-01-01

    Full Text Available In this paper, one approach to Geographic Information System (GIS) and Remote Sensing (RS) assessment of potential natural hazard areas (excess erosion, landslides, flash floods and fires) is presented. For that purpose, Pehchevo Municipality in the easternmost part of the Republic of Macedonia is selected as a case study area because of the high local impact of natural hazards on the environment, the socio-demographic situation and the local economy. First, the most relevant static factors for each type of natural hazard are selected (topography, land cover, anthropogenic objects and infrastructure). With GIS and satellite imagery, a multi-layer calculation is performed based on available traditional equations, clustering or discretization procedures. In this way, suitable relatively "static" natural hazard maps (models) are produced. Then, dynamic (mostly climate-related) factors are included in the previous models, resulting in appropriate scenarios correlated with different amounts of precipitation, temperature, wind direction etc. Finally, the GIS-based scenarios are evaluated and tested with field checks or very fine resolution Google Earth imagery, showing good accuracy. Further development of such GIS models in connection with automatic remote meteorological stations and dynamic satellite imagery (like MODIS) will provide timely warning of impending natural hazards, avoiding potential damage or even casualties.

  11. Identifying model pollutants to investigate biodegradation of hazardous XOCs in WWTPs

    Energy Technology Data Exchange (ETDEWEB)

    Press-Kristensen, Kaare; Ledin, Anna; Schmidt, Jens Ejbye; Henze, Mogens [Department of Environment and Resources, Technical University of Denmark Building 115, 2800 Lyngby (Denmark)

    2007-02-01

    Xenobiotic organic compounds (XOCs) in wastewater treatment plant (WWTP) effluents might cause toxic effects in ecosystems. Several investigations have emphasized biodegradation as an important removal mechanism to reduce pollution with XOCs from WWTP effluents. The aim of the study was to design a screening tool to identify and select hazardous model pollutants for the further investigation of biodegradation in WWTPs. The screening tool consists of three criteria: the XOC is present in WWTP effluents, the XOC constitutes an intolerable risk in drinking water or the environment, and the XOC is expected to be biodegradable in WWTPs. The screening tool was tested on bisphenol A (BPA), carbamazepine (CBZ), di(2-ethylhexyl) phthalate (DEHP), 17β-estradiol (E2), estrone (E1), 17α-ethinylestradiol (EE2), ibuprofen, naproxen, nonylphenol (NP), and octylphenol (OP). BPA, DEHP, E2, E1, EE2, and NP passed all criteria in the screening tool and were selected as model pollutants. OP did not pass the screening and was rejected as a model pollutant. CBZ, ibuprofen, and naproxen were not finally evaluated due to insufficient data. (author)

  12. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  13. Local models for rainstorm-induced hazard analysis on Mediterranean river-torrential geomorphological systems

    Directory of Open Access Journals (Sweden)

    N. Diodato

    2004-01-01

    Full Text Available Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosion, landslides, flash floods and river floods) occurring in a spatially and temporally random way and triggered by rainfall of different intensity and extent. Storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on short- and long-term climatic fluctuations, especially in rainfall quantity. An algorithm for the characterisation of this impact, called the Rainfall Hazard Index (RHI), is developed with a less expensive methodology. In RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and that a sudden fluctuation in this regime, especially one exceeding the thresholds of an acceptable range of flexibility, may have disastrous consequences for the mountain environment. The RHI integrates two rainfall variables based upon current and historical storm depth data, both of a fixed duration, and one dimensionless parameter representative of the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy). A database including data from 27 events which occurred during a 77-year period (1926-2002) was compared with the Benevento-station RHI(24h) for a qualitative validation. Trends in RHIx for annual maximum storms of duration 1, 3 and 24 h were also examined. Little change is observed at the 3- and 24-h storm durations, but a significant increase results in the hazard of short and intense storms (RHIx(1h)), in agreement with a reduction in the return period for extreme rainfall events.

  14. Nonlinear joint models for individual dynamic prediction of risk of death using Hamiltonian Monte Carlo: application to metastatic prostate cancer

    Directory of Open Access Journals (Sweden)

    Solène Desmée

    2017-07-01

    Full Text Available Abstract Background Joint models of longitudinal and time-to-event data are increasingly used to perform individual dynamic prediction of a risk of event. However, the difficulty of performing inference in nonlinear models and of calculating the distribution of individual parameters has long limited this approach to linear mixed-effect models for the longitudinal part. Here we use a Bayesian algorithm and a nonlinear joint model to calculate individual dynamic predictions. We apply this approach to predict the risk of death in metastatic castration-resistant prostate cancer (mCRPC) patients with frequent Prostate-Specific Antigen (PSA) measurements. Methods A joint model is built using a large population of 400 mCRPC patients where PSA kinetics is described by a biexponential function and the hazard function is a PSA-dependent function. Using the Hamiltonian Monte Carlo algorithm implemented in the Stan software and the estimated population parameters in this population as priors, the a posteriori distribution of the hazard function is computed for a new patient knowing his PSA measurements until a given landmark time. Time-dependent area under the ROC curve (AUC) and Brier score are derived to assess discrimination and calibration of the model predictions, first on 200 simulated patients and then on 196 real patients that are not included to build the model. Results Satisfactory coverage probabilities of Monte Carlo prediction intervals are obtained for the longitudinal and hazard functions. Individual dynamic predictions provide good predictive performance for landmark times larger than 12 months and horizon times of up to 18 months for both simulated and real data. Conclusions As nonlinear joint models can characterize the kinetics of biomarkers and their link with a time-to-event, this approach could be useful to improve patients' follow-up and the early detection of the patients most at risk.

  15. Time dependent seismic hazard

    Science.gov (United States)

    Polidoro, B.; Iervolino, I.; Chioccarelli, E.; Giorgio, M.

    2012-04-01

    Probabilistic seismic hazard is usually computed through a homogeneous Poisson process which, even though it is a time-independent process, is widely used for its very convenient properties. However, when a single fault is of concern and/or the time scale is different from that of the long term, time-dependent processes are required. In this paper, different time-dependent models are reviewed with working examples. In fact, the Paganica fault (in central Italy) has been considered to compute both the probability of occurrence of at least one event in the lifespan of the structure, as well as the seismic hazard expressed in terms of the probability of exceedance of an intensity value in a given time frame causing the collapse of the structure. Several models, either well known or representing novel applications to engineering hazard, have been considered; limitations and issues in their application are also discussed. The Brownian Passage Time (BPT) model is based on a stochastic modification of the deterministic stick-slip oscillator model for characteristic earthquakes, i.e., on the addition of random perturbations (a Gaussian white noise) to the deterministic load path predicted by elastic rebound theory. This model assumes that the load state is at some ground level immediately after an event, increases steadily over time, reaches a failure threshold and relaxes instantaneously back to the ground level. For this model a variable threshold has also been considered, to take into account the uncertainty of the threshold value. For the slip-predictable model it is assumed that the stress accumulates at a constant rate starting from some initial stress level. Stress is assumed to accumulate for a random period of time until an earthquake occurs. The size of the earthquake is governed by the stress release and is a function of the elapsed time since the last event. In the time-predictable model stress buildup occurs at a constant rate until the accumulated stress reaches a threshold
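    A minimal sketch of the contrast between the time-independent Poisson model and a BPT renewal model: the conditional probability of at least one characteristic event in the next ΔT years, given the time elapsed since the last event. The mean recurrence, aperiodicity and elapsed time below are illustrative values, not estimates for the Paganica fault:

```python
import numpy as np
from scipy.integrate import quad

mu, alpha = 500.0, 0.5        # illustrative mean recurrence interval (yr) and aperiodicity
t_elapsed, dT = 300.0, 50.0   # years since the last event, and forecast window (yr)

def bpt_pdf(t):
    """Brownian Passage Time (inverse Gaussian) density with mean mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
           np.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

def bpt_cdf(t):
    return quad(bpt_pdf, 0.0, t)[0]

# Time-dependent (renewal) conditional probability of an event in (t_elapsed, t_elapsed + dT].
p_bpt = (bpt_cdf(t_elapsed + dT) - bpt_cdf(t_elapsed)) / (1.0 - bpt_cdf(t_elapsed))

# Time-independent Poisson probability for the same window (memoryless: elapsed time is irrelevant).
p_poisson = 1.0 - np.exp(-dT / mu)

print(f"BPT conditional probability : {p_bpt:.3f}")
print(f"Poisson probability         : {p_poisson:.3f}")
```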

  16. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  17. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, specially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...

  18. Landslide hazard assessment: recent trends and techniques.

    Science.gov (United States)

    Pardeshi, Sudhakar D; Autade, Sumant E; Pardeshi, Suchitra S

    2013-01-01

    Landslide hazard assessment is an important step towards landslide hazard and risk management. There are several methods of Landslide Hazard Zonation (LHZ), viz. heuristic, semi-quantitative, quantitative, probabilistic and multi-criteria decision-making processes. However, no single method is accepted universally for effective assessment of landslide hazards. In recent years, several attempts have been made to apply different methods of LHZ and to compare results in order to find the best-suited model. This paper presents a review of research on landslide hazard mapping published in recent years. Advanced multivariate techniques have proved effective in spatial prediction of landslides with a high degree of accuracy. Physical process based models also perform well in LHZ mapping, even in areas with a poor database. The multi-criteria decision-making approach also plays a significant role in determining the relative importance of landslide causative factors in the slope instability process. Remote Sensing and Geographical Information Systems (GIS) are powerful tools to assess landslide hazards and have been used extensively in landslide research over the last decade. Aerial photographs and high resolution satellite data are useful in detection, mapping and monitoring of landslide processes. GIS-based LHZ models help not only to map and monitor landslides but also to predict future slope failures. Advancements in geospatial technologies have opened the doors for detailed and accurate assessment of landslide hazards.

  19. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  20. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    Journal of Earth System Science, Volume 126, Issue 3. A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall for two days in ...

  1. Independent test of a model to predict severe acute esophagitis

    Directory of Open Access Journals (Sweden)

    Ellen X. Huang, PhD

    2017-01-01

    Conclusions: The previously published model was validated on an independent data set and determined to be nearly as predictive as the best possible two-parameter logistic model even though it overpredicted risk systematically. A novel, machine learning-based model using a bootstrapping approach showed reasonable predictive power.

  2. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  3. Use of short-term test systems for the prediction of the hazard represented by potential chemical carcinogens

    Energy Technology Data Exchange (ETDEWEB)

    Glass, L.R.; Jones, T.D.; Easterly, C.E.; Walsh, P.J.

    1990-10-01

    It has been hypothesized that results from short-term bioassays will ultimately provide information that will be useful for human health hazard assessment. Historically, the validity of the short-term tests has been assessed using the framework of epidemiologic/medical screens. In this context, the results of the carcinogen (long-term) bioassay are generally used as the standard. However, this approach is widely recognized as being biased and, because it employs qualitative data, cannot be used to assist in isolating those compounds which may represent a more significant toxicologic hazard than others. In contrast, the goal of this research is to address the problem of evaluating the utility of the short-term tests for hazard assessment using an alternative method of investigation. Chemicals were selected mostly from the list of carcinogens published by the International Agency for Research on Cancer (IARC); a few other chemicals commonly recognized as hazardous were included. Tumorigenicity and mutagenicity data on 52 chemicals were obtained from the Registry of Toxic Effects of Chemical Substances (RTECS) and were analyzed using a relative potency approach. The data were evaluated in a format which allowed for a comparison of the ranking of the mutagenic relative potencies of the compounds (as estimated using short-term data) vs. the ranking of the tumorigenic relative potencies (as estimated from the chronic bioassays). Although this was a preliminary investigation, it offers evidence that the short-term test systems may be of utility in ranking the hazards represented by chemicals which may contribute to increased carcinogenesis in humans as a result of occupational or environmental exposures. 177 refs., 8 tabs.
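    The ranking comparison described above lends itself to a simple rank-correlation check. The sketch below is a hypothetical illustration, assuming per-chemical relative potency estimates are already in hand; the chemical names and potency values are invented and not taken from the study.

```python
# Hypothetical sketch: compare hazard rankings from short-term (mutagenicity)
# and long-term (tumorigenicity) relative potencies. Values are illustrative only.
from scipy.stats import spearmanr

relative_potency = {
    # chemical       (mutagenic RP, tumorigenic RP)
    "chemical_A": (12.0, 9.5),
    "chemical_B": (0.4, 0.7),
    "chemical_C": (3.1, 2.2),
    "chemical_D": (0.05, 0.1),
}

mutagenic = [v[0] for v in relative_potency.values()]
tumorigenic = [v[1] for v in relative_potency.values()]

rho, p_value = spearmanr(mutagenic, tumorigenic)
print(f"Spearman rank correlation between the two rankings: {rho:.2f} (p = {p_value:.3f})")
```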

  4. Combining SLBL routine with landslide-generated tsunami model for a quick hazard assessment tool

    Science.gov (United States)

    Franz, Martin; Rudaz, Benjamin; Jaboyedoff, Michel; Podladchikov, Yury

    2016-04-01

    Regions with steep topography are potentially subject to landslide-induced tsunamis, because of the proximity between lakes, rivers, sea shores and potential instabilities. The concentration of population and infrastructure on water body shores and in downstream valleys could lead to catastrophic consequences. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a tool which allows the construction of the landslide geometry, and which is able to simulate its propagation, the generation and propagation of the wave and eventually the spread on the shores or the associated downstream flow. The tool is developed in the Matlab environment, with a graphical user interface (GUI) to select the parameters in a user-friendly manner. The whole process is done in three steps involving different methods. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is simulated using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by the combination of the two latter sets of equations. The intensity map is based on the flooding criterion used in Switzerland, provided by the OFEG, and results from the multiplication of the velocity and the depth obtained during the simulation. The tool can be used for hazard assessment in the case of well-known landslides, where the SLBL routine can be constrained and checked for realistic construction of the geometrical model. In less well-known cases, various failure plane geometries can be built automatically within a given range, and thus a multi-scenario approach is used. In any case, less well-known parameters such as the landslide velocity, its run-out distance, etc. can also be set to vary within given ranges, leading to multi
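    The intensity criterion mentioned above (the product of flow depth and velocity) is straightforward to compute once simulation outputs are available. The following is a minimal sketch with invented class thresholds; it does not reproduce the official OFEG intensity classes or the tool's Matlab implementation.

```python
# Hypothetical sketch: build an intensity map as depth * velocity and classify it.
# The thresholds below are placeholders, not the official OFEG values.
import numpy as np

def intensity_map(depth, velocity, thresholds=(0.5, 2.0)):
    """Return intensity (m^2/s) and a class map: 0 = low, 1 = medium, 2 = high."""
    intensity = depth * velocity                   # element-wise product
    classes = np.digitize(intensity, thresholds)   # bin into 3 classes
    return intensity, classes

# Example grids (e.g. outputs of one shallow-water simulation time step)
depth = np.array([[0.1, 0.8], [1.5, 3.0]])      # flow depth in metres
velocity = np.array([[0.5, 1.0], [2.0, 4.0]])   # flow velocity in m/s

intensity, classes = intensity_map(depth, velocity)
print(intensity)
print(classes)
```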

  5. Hazardous Waste

    Science.gov (United States)

    ... you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint ...

  6. Predicting the Yield Stress of SCC using Materials Modelling

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Hasholt, Marianne Tange; Pade, Claus

    2005-01-01

    A conceptual model for predicting the Bingham rheological parameter yield stress of SCC has been established. The model used here is inspired by previous work of Oh et al. (1), predicting that the yield stress of concrete relative to the yield stress of paste is a function of the relative thickness...... and distribution were varied between SCC types. The results indicate that yield stress of SCC may be predicted using the model....

  7. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity.

    Science.gov (United States)

    Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure-activity relationships.
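    As a hedged sketch of the clustering step described above: given a matrix of per-nanomaterial response scores (rows = nanomaterials, columns = endpoints), hierarchical clustering groups materials with similar toxicity profiles. The data, linkage method, and number of clusters below are illustrative assumptions, not the study's actual EZ Metric pipeline.

```python
# Hypothetical sketch: cluster nanomaterials by their response profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 8 nanomaterials x 5 sub-lethal/lethal endpoints (invented scores in [0, 1])
scores = rng.random((8, 5))

Z = linkage(scores, method="ward")                # agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into 3 clusters
print(labels)
```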

  8. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)

    2015-06-15

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.

  9. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful... Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  10. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2017-07-29

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this, the paper uses traumatic brain injury (TBI) datasets. TBI is a serious condition worldwide and needs more attention due to its severity and its impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.
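    Since the abstract reports accuracy, sensitivity, and specificity, a brief reminder of how these are computed from a binary confusion matrix may help; the counts below are invented for illustration.

```python
# Hypothetical sketch: classification metrics from a binary confusion matrix.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=42, fp=5, tn=37, fn=6)
print(f"accuracy={acc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```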

  11. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  12. Interaction prediction between groundwater and quarry extension using discrete choice models and artificial neural networks

    CERN Document Server

    Barthélemy, Johan; Collier, Louise; Hallet, Vincent; Moriamé, Marie; Sartenaer, Annick

    2016-01-01

    Groundwater and rock are intensively exploited around the world. When a quarry is deepened, the water table of the exploited geological formation might be reached. A dewatering system is therefore installed so that the quarry activities can continue, possibly impacting the nearby water catchments. In order to recommend an adequate feasibility study before deepening a quarry, we propose two interaction indices between extractive activity and groundwater resources based on hazard and vulnerability parameters used in the assessment of natural hazards. The levels of each index (low, medium, high, very high) correspond to the potential impact of the quarry on the regional hydrogeology. The first index is based on a discrete choice modelling methodology while the second relies on an artificial neural network. It is shown that these two complementary approaches (the former being probabilistic, the latter fully deterministic) are able to predict the level of interaction accurately. Their use is finally illustrate...
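    As a hedged illustration of the discrete-choice side of such an approach, the sketch below fits a multiclass logistic regression mapping hazard and vulnerability parameters to the four interaction levels. The features, data, and model specification are invented and do not reproduce the paper's indices.

```python
# Hypothetical sketch: classify quarry-groundwater interaction level
# (0 = low, 1 = medium, 2 = high, 3 = very high) from hazard/vulnerability parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Invented features, e.g. quarry depth below water table, distance to catchment,
# an aquifer transmissivity index, and exploited thickness (all rescaled to [0, 1]).
X = rng.random((200, 4))
y = rng.integers(0, 4, size=200)             # invented interaction levels

model = LogisticRegression(max_iter=1000)    # multiclass logistic regression
model.fit(X, y)
print(model.predict(X[:5]))
```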

  13. Modeling Short-Term Maximum Individual Exposure from Airborne Hazardous Releases in Urban Environments. Part I: Validation of a Deterministic Model with Field Experimental Data

    Directory of Open Access Journals (Sweden)

    George C. Efthimiou

    2015-06-01

    Full Text Available The release of airborne hazardous substances into the atmosphere has a direct effect on human health: during inhalation, a quantity of the substance enters the human body through the respiratory system, which can cause serious or even irreparable damage to health. One of the key problems in such cases is the prediction of the maximum individual exposure. Current state-of-the-art methods, which are based on the concentration cumulative distribution function and require knowledge of the concentration variance and the intermittency factor, have limitations. Recently, the authors proposed a deterministic approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. The purpose of the first part of this study is to validate the deterministic approach with the extensive dataset of the MUST (Mock Urban Setting Test) field experiment. This dataset includes 81 trials, which practically cover various atmospheric conditions and stability classes, and contains in total 4004 non-zero concentration sensor records with time resolutions of 0.01–0.02 s. The results strengthen the usefulness of the deterministic model in predicting short-term maximum individual exposure. Another important output is the estimation of the uncertainty of the methodology.

  14. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  15. Modeling and hazard mapping of complex cascading mass movement processes: the case of glacier lake 513, Carhuaz, Peru

    Science.gov (United States)

    Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo

    2013-04-01

    The Cordilleras in Peru are especially vulnerable to, and affected by, impacts from climate change. Local communities and cities often lie directly within the reach of major hazard potentials such as lake outburst floods (aluviones), mud-/debris flows (huaycos) or large rock-/ice avalanches. These hazards have repeatedly and strongly affected the region over recent decades and since the last century, and thousands of people have been killed. One of the most recent events in the Cordillera Blanca occurred on 11 April 2010, when a rock/ice avalanche from the top of Hualcán mountain, NE of the town of Carhuaz, impacted glacier lake 513 (Laguna 513), caused displacement waves and triggered an outburst flood wave. The flow repeatedly transformed from debris flow to hyperconcentrated flow and eventually caused significant damage in Carhuaz. This event motivated early warning and prevention efforts to reduce risks related to ice/rock avalanches and glacier lake outburst floods (GLOF). One of the basic components of an early warning system is the assessment, understanding and communication of relevant hazards and risks. Here we report on the methodology and results of generating GLOF-related hazard maps for Carhuaz based on numerical modeling and field work. This exercise required an advanced concept and the implementation of different mass movement models. Specifically, numerical models were applied for simulating avalanche flow, avalanche lake impact, displacement wave generation and lake overtopping, and eventually flow propagation of the outburst flood with changing rheology between debris flow and hyperconcentrated flow. We adopted a hazard mapping procedure slightly adjusted from guidelines developed in Switzerland and in the Andes region. A methodology has thereby been developed to translate results from numerical mass movement modeling into hazard maps. The resulting hazard map was verified and adjusted during field work. This study shows

  16. SCEC Community Modeling Environment (SCEC/CME) - Seismic Hazard Analysis Applications and Infrastructure

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, B.; SCEC ITR Collaboration

    2003-12-01

    The Southern California Earthquake Center (SCEC) has formed a Geoscience/IT partnership to develop an advanced information infrastructure for system-level earthquake science in Southern California. This SCEC/ITR partnership comprises SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. This collaboration recently completed the second year of a five-year National Science Foundation (NSF) funded ITR project called the SCEC Community Modeling Environment (SCEC/CME). The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed by project collaborators include a Probabilistic Seismic Hazard Analysis system called OpenSHA [Field et al., this meeting]. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. A Rupture Dynamic Model (RDM) has also been developed that couples a rupture dynamics simulation into an anelastic wave model. The collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of SHA programs. To support computationally expensive simulations, we have constructed a grid-based system utilizing Globus software [Kesselman et al., this meeting]. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC, NPACI and Teragrid High Performance Computing Centers. We have

  17. Real-time multi-model decadal climate predictions

    NARCIS (Netherlands)

    Smith, D.M.; Scaife, A.A.; Boer, G.J.; Caian, M.; Doblas-Reyes, F.J.; Guemas, V.; Hawkins, E.; Hazeleger, W.; Hermanson, L.; Ho, C.K.; Ishii, M.; Kharin, V.; Kimoto, M.; Kirtman, B.; Lean, J.; Matei, D.; Merryfield, W.J.; Muller, W.A.; Pohlmann, H.; Rosati, A.; Wouters, B.; Wyser, K.

    2013-01-01

    We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus

  18. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are constantly faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. The paper then suggests that the way forward for developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.
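    A minimal sketch of the Markov chain idea referenced above, assuming faulting is discretized into condition states and that a one-step (annual) transition probability matrix has been estimated from survey data: the state distribution after n years is the initial distribution multiplied by the n-th power of the transition matrix. The matrix below is invented for illustration.

```python
# Hypothetical sketch: propagate a pavement condition-state distribution
# through an (invented) annual transition probability matrix.
import numpy as np

# States: 0 = low faulting, 1 = moderate, 2 = severe
P = np.array([
    [0.85, 0.12, 0.03],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],   # severe is absorbing until maintenance occurs
])

state0 = np.array([1.0, 0.0, 0.0])   # new pavement: all probability mass in state 0

def distribution_after(years, state=state0, P=P):
    """Condition-state probabilities after the given number of years."""
    return state @ np.linalg.matrix_power(P, years)

print(distribution_after(5))
```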

  19. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  20. A Comparison of Hazard Prediction and Assessment Capability (HPAC) Software Dose-Rate Contour Plots to a Sample of Local Fallout Data From Test Detonations in the Continental United States, 1945 - 1962

    National Research Council Canada - National Science Library

    Chancellor, Richard W

    2005-01-01

    A comparison of Hazard Prediction and Assessment Capability (HPAC) software dose-rate contour plots to a sample of local nuclear fallout data from test detonations in the continental United States, 1945 - 1962, is performed...

  1. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion... and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model predictions. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
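    A hedged sketch of the growing-degree-hour idea behind the second submodel: after rest completion, hourly temperatures above a base temperature are accumulated, and first bloom (used here as a proxy for the start of the pollen season) is predicted when the running sum reaches a requirement. The base temperature, the GDH requirement, and the linear accumulation rule below are simplifying assumptions, not the ASYMCUR formulation.

```python
# Hypothetical sketch: predict the day on which a growing-degree-hour (GDH)
# requirement is met, counting from an assumed rest-completion date.
def gdh(temp_c, base=4.0):
    """Simplified linear GDH contribution of one hourly temperature (deg C)."""
    return max(0.0, temp_c - base)

def predict_bloom_day(hourly_temps, requirement=6000.0):
    """hourly_temps: hourly temperatures (deg C) after rest completion."""
    total = 0.0
    for hour, t in enumerate(hourly_temps):
        total += gdh(t)
        if total >= requirement:
            return hour // 24      # 0-based day index after rest completion
    return None                    # requirement not reached within the forecast

# Invented forecast: constant 10 deg C gives 6 GDH per hour, i.e. 1000 h ~ day 41
print(predict_bloom_day([10.0] * 2000))
```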

  2. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  3. Dynamics Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2017-12-13

    In this article, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model with nine degrees of freedom. The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes Model Predictive Control (MPC). MPC uses an internal model to predict the output in advance, compares the predicted output to the reference, and optimizes the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a PD controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to reproduce kinematic outputs close to the experimental data.
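    A minimal sketch of the MPC idea described above, assuming a generic discrete-time linear plant: at each step the controller predicts the output over a horizon as a function of the future inputs and chooses the input sequence minimizing the predicted tracking error (here by unconstrained least squares, applying only the first input). This is a generic illustration, not the article's seven-segment gait model.

```python
# Hypothetical sketch: unconstrained linear MPC via least squares.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # invented double-integrator-like plant
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
N = 20                                    # prediction horizon

def mpc_step(x, reference):
    """Choose N future inputs minimizing the predicted output error; apply the first."""
    # Prediction: y_{k+1} = C A^{k+1} x + sum_{j<=k} C A^{k-j} B u_j
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = (C @ np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    free_response = F @ x
    u = np.linalg.lstsq(G, reference - free_response, rcond=None)[0]
    return u[0]                            # receding horizon: apply only the first move

x = np.array([0.0, 0.0])
ref = np.ones(N)                           # track a unit step
for _ in range(50):
    u0 = mpc_step(x, ref)
    x = A @ x + (B * u0).ravel()
print(C @ x)                               # output should approach 1.0
```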

  4. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  5. DMC Based on Weighting Correction of Predictive Model Errors

    OpenAIRE

    Liu Yumin; Sun Yonghe; XU Fengming; Wang Tao

    2013-01-01

    Ordinary dynamic matrix control (DMC) corrects the predicted values using only the current error, so the correction is insufficient. This paper proposes an algorithm in which, during error correction, the predictive model error is introduced and the predicted values are corrected with weights, improving the disturbance rejection and the regulating speed of the control system. Theoretical analysis and simulation show that the algorithm is effective.
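    A hedged sketch of the feedback-correction step the abstract refers to, assuming a step-response (dynamic matrix) predictor: the current prediction error is spread over the predicted trajectory through a correction weight vector rather than as a single uniform shift. The step-response coefficients, horizon, and weights below are invented for illustration, not the paper's algorithm.

```python
# Hypothetical sketch: weighted feedback correction of a DMC-style prediction.
import numpy as np

a = np.array([0.2, 0.5, 0.8, 0.95, 1.0])   # invented unit step-response coefficients
P = len(a)                                  # prediction horizon

def predict(past_prediction, du, a=a):
    """Shift the previous prediction and add the effect of the new input move du."""
    free = np.append(past_prediction[1:], past_prediction[-1])   # shifted free response
    return free + a * du

def correct(prediction, measured_output, weights=np.linspace(1.0, 0.3, P)):
    """Weighted correction: the current error is applied with decreasing weights."""
    e = measured_output - prediction[0]
    return prediction + weights * e

y_pred = np.zeros(P)
y_pred = predict(y_pred, du=1.0)                 # apply a unit input move
y_pred = correct(y_pred, measured_output=0.25)   # plant disagreed with the model
print(y_pred)
```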

  6. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of cowpea yield-water use and weather data were collected.

  7. Tampa Bay Water Clarity Model (TBWCM): As a Predictive Tool

    Science.gov (United States)

    The Tampa Bay Water Clarity Model was developed as a predictive tool for estimating the impact of changing nutrient loads on water clarity as measured by secchi depth. The model combines a physical mixing model with an irradiance model and nutrient cycling model. A 10 segment bi...

  8. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
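    As a hedged illustration of the Poisson-GP framing above, the sketch below computes the exceedance probability of a design magnitude under a Generalized Pareto exceedance distribution and converts it into an average return period, first for a stationary scale parameter and then for a scale that drifts over time. The parameter values and the linear-growth assumption are illustrative, not the paper's fitted model.

```python
# Hypothetical sketch: exceedance probability and average return period
# for a Poisson-GP (partial duration series) model, stationary vs. trending scale.
from scipy.stats import genpareto

lam = 2.0          # average number of PDS exceedances per year (invented)
shape = 0.1        # GP shape parameter (invented)
scale0 = 15.0      # GP scale parameter at year 0 (invented)
x_design = 60.0    # design magnitude above the PDS threshold (invented)

def return_period(scale):
    """Average return period (years) for exceeding x_design."""
    p_exceed = genpareto.sf(x_design, c=shape, scale=scale)   # P(X > x | an exceedance)
    return 1.0 / (lam * p_exceed)

print("stationary:", return_period(scale0))
# Nonstationary case: scale grows 1% per year, shrinking the return period over time
for year in (0, 25, 50):
    print(f"year {year}:", return_period(scale0 * (1.01 ** year)))
```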

  9. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  10. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  11. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of