WorldWideScience

Sample records for hazard models predicting

  1. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
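
    The abstract describes training logistic regression and SVM classifiers on combined assay, physicochemical, and read-across variables. The sketch below is a hedged illustration of that kind of workflow, not the authors' code: the feature names, the synthetic data, and the toy labelling rule are assumptions standing in for the 72 training and 24 test substances.

    ```python
    # Illustrative sketch only: LR and SVM classifiers on a combined feature matrix.
    # The synthetic data and column names are placeholders, not the published dataset.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.random((96, 5)),
                     columns=["dpra", "h_clat", "read_across", "keratinosens", "log_p"])
    y = (X["dpra"] + X["h_clat"] + X["read_across"] > 1.5).astype(int)  # toy label rule
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=72, random_state=0)

    for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                      ("SVM", SVC(kernel="rbf"))]:
        clf.fit(X_train, y_train)
        print(name, "accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 2))
    ```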

  2. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
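
    A minimal sketch of the modelling idea, not the predictAL software: a logistic regression on baseline risk factors with the c-index (equivalent to the ROC AUC for a binary outcome) as the measure of discrimination. The synthetic cohort and variable names below are assumptions.

    ```python
    # Hedged sketch: logistic regression risk model evaluated with the c-index (ROC AUC).
    # Synthetic data stands in for the cohort; predictor names are assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "age": rng.integers(18, 75, n),
        "baseline_audit": rng.integers(0, 8, n),
        "panic_syndrome": rng.integers(0, 2, n),
        "lifetime_alcohol_problem": rng.integers(0, 2, n),
    })
    logit = -4 + 0.4 * df["baseline_audit"] + 0.8 * df["lifetime_alcohol_problem"]
    df["hazardous_at_followup"] = rng.random(n) < 1 / (1 + np.exp(-logit))

    X, y = df.drop(columns="hazardous_at_followup"), df["hazardous_at_followup"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("c-index (ROC AUC):", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
    ```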

  3. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  4. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads’ STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are identified for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The significant influencing factors are identified for crash clearance time and arrival time, and their quantitative effects on clearance and arrival time are presented for crash and hazard incidents. The model accuracy is analyzed at the end.
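
    The distribution-fitting step described above can be illustrated with a short, hedged sketch: candidate duration distributions (gamma, Weibull, log-logistic) are fitted to clearance times for one incident type and compared by AIC. The synthetic durations below are placeholders, not the SIMS data.

    ```python
    # Illustrative comparison of candidate duration distributions by AIC.
    # Synthetic clearance times stand in for the real incident records.
    import numpy as np
    from scipy import stats

    clearance = stats.gamma.rvs(a=2.0, scale=25.0, size=500, random_state=2)  # minutes, synthetic

    candidates = {"gamma": stats.gamma, "weibull": stats.weibull_min, "log-logistic": stats.fisk}
    for name, dist in candidates.items():
        params = dist.fit(clearance, floc=0)           # location fixed at zero
        k = len(params) - 1                            # free parameters (loc was fixed)
        aic = 2 * k - 2 * np.sum(dist.logpdf(clearance, *params))
        print(f"{name}: AIC = {aic:.1f}")
    ```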

  5. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    Science.gov (United States)

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  6. Spatial prediction of landslide hazard using discriminant analysis and GIS

    Science.gov (United States)

    Peter V. Gorsevski; Paul Gessler; Randy B. Foltz

    2000-01-01

    Environmental attributes relevant for spatial prediction of landslides triggered by rain and snowmelt events were derived from a digital elevation model (DEM). Those data, in conjunction with statistical methods and a geographic information system (GIS), provided a detailed basis for spatial prediction of landslide hazard. The spatial prediction of landslide hazard in this paper is...

  7. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity, and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and that the best-fitting distributions differ across phases. Given the best hazard-based model for each incident time phase, reasonable predictions can be obtained for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
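
    For the covariate-based accelerated failure time (AFT) models named above, a hedged sketch using the lifelines library is shown below; the synthetic incident data, covariates, and absence of censoring are assumptions, and the flexible spline-based and gamma-heterogeneity variants are not reproduced.

    ```python
    # Sketch: fit Weibull, log-logistic, and log-normal AFT models and compare by AIC.
    # Synthetic data only; not the Beijing incident dataset.
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter, LogLogisticAFTFitter, LogNormalAFTFitter

    rng = np.random.default_rng(3)
    n = 400
    df = pd.DataFrame({"lanes_blocked": rng.integers(0, 3, n), "peak_hour": rng.integers(0, 2, n)})
    scale = np.exp(3.0 + 0.3 * df["lanes_blocked"] + 0.2 * df["peak_hour"])
    df["duration_min"] = rng.weibull(1.5, n) * scale   # simulated incident durations
    df["observed"] = 1                                 # toy example: no censoring

    for name, aft in [("Weibull", WeibullAFTFitter()), ("Log-logistic", LogLogisticAFTFitter()),
                      ("Log-normal", LogNormalAFTFitter())]:
        aft.fit(df, duration_col="duration_min", event_col="observed")
        print(f"{name} AFT: AIC = {aft.AIC_:.1f}")
    ```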

  8. A Model for Generating Multi-hazard Scenarios

    Science.gov (United States)

    Lo Jacomo, A.; Han, D.; Champneys, A.

    2017-12-01

    Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own different rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
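
    A toy sketch of the Monte Carlo idea described above: hazard occurrences are sampled over a planning horizon under uncertain parameters and expected losses are compared with and without a mitigation intervention. All rates and loss figures are made-up placeholders, not values from the Sichuan case study.

    ```python
    # Toy Monte Carlo multi-hazard scenario sampler with a simple mitigation comparison.
    import numpy as np

    rng = np.random.default_rng(0)

    def expected_loss(years=50, n_runs=10_000, landslide_rate=0.2, mitigation=False):
        rate = landslide_rate * (0.5 if mitigation else 1.0)   # assume mitigation halves landslide rate
        losses = np.zeros(n_runs)
        for i in range(n_runs):
            quakes = rng.poisson(0.05 * years)                 # rare, high-impact events
            slides = rng.poisson(rate * years + 0.5 * quakes)  # landslides, partly quake-triggered
            floods = rng.poisson(0.3 * years)
            losses[i] = 10.0 * quakes + 2.0 * slides + 1.0 * floods
        return losses.mean()

    print("expected loss, no mitigation:  ", round(expected_loss(), 2))
    print("expected loss, with mitigation:", round(expected_loss(mitigation=True), 2))
    ```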

  9. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    Science.gov (United States)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift according to economic prosperity, rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on that index. When the two-stage models use this adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper therefore have incremental usefulness in predicting financial distress.
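
    The discrete-time hazard stage can be illustrated, under assumptions, as a logistic regression on a firm-period panel in which each firm contributes one row per period until distress or censoring. The synthetic panel and covariate names below are placeholders, not the Taiwanese data.

    ```python
    # Hedged sketch of a discrete-time hazard model as a firm-period logistic regression.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 3000                                        # synthetic firm-period observations
    panel = pd.DataFrame({
        "roa": rng.normal(0.05, 0.1, n),
        "leverage": rng.uniform(0.1, 0.9, n),
        "excess_return": rng.normal(0.0, 0.2, n),
    })
    logit = -4 - 8 * panel["roa"] + 3 * panel["leverage"] - 1.5 * panel["excess_return"]
    panel["distress_event"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(panel[["roa", "leverage", "excess_return"]])
    hazard_model = sm.Logit(panel["distress_event"], X).fit(disp=False)
    panel["predicted_hazard"] = hazard_model.predict(X)    # per-period distress probability
    print(hazard_model.params.round(3))
    ```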

  10. The Impact Hazard in the Context of Other Natural Hazards and Predictive Science

    Science.gov (United States)

    Chapman, C. R.

    1998-09-01

    The hazard due to impact of asteroids and comets has been recognized as analogous, in some ways, to other infrequent but consequential natural hazards (e.g. floods and earthquakes). Yet, until recently, astronomers and space agencies have felt no need to do what their colleagues and analogous agencies must do in order to assess, quantify, and communicate predictions to those with a practical interest in the predictions (e.g. public officials who must assess the threats, prepare for mitigation, etc.). Recent heightened public interest in the impact hazard, combined with increasing numbers of "near misses" (certain to increase as Spaceguard is implemented) requires that astronomers accept the responsibility to place their predictions and assessments in terms that may be appropriately considered. I will report on preliminary results of a multi-year GSA/NCAR study of "Prediction in the Earth Sciences: Use and Misuse in Policy Making" in which I have represented the impact hazard, while others have treated earthquakes, floods, weather, global climate change, nuclear waste disposal, acid rain, etc. The impact hazard presents an end-member example of a natural hazard, helping those dealing with more prosaic issues to learn from an extreme. On the other hand, I bring to the astronomical community some lessons long adopted in other cases: the need to understand the policy purposes of impact predictions, the need to assess potential societal impacts, the requirements to very carefully assess prediction uncertainties, considerations of potential public uses of the predictions, awareness of ethical considerations (e.g. conflicts of interest) that affect predictions and acceptance of predictions, awareness of appropriate means for publicly communicating predictions, and considerations of the international context (especially for a hazard that knows no national boundaries).

  11. Processing LiDAR Data to Predict Natural Hazards

    Science.gov (United States)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  12. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the framework of seismic risk, and particularly of seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures with the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for structure-specific needs (conventional structures or high-risk installations), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion to which the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and being part of the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to this expert practice and to participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion for designing or verifying the stability of structures. (author)

  13. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Vinicius M. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Muratov, Eugene [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Laboratory of Theoretical Chemistry, A.V. Bogatsky Physical-Chemical Institute NAS of Ukraine, Odessa 65080 (Ukraine); Fourches, Denis [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Strickland, Judy; Kleinstreuer, Nicole [ILS/Contractor Supporting the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), P.O. Box 13501, Research Triangle Park, NC 27709 (United States); Andrade, Carolina H. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States)

    2015-04-15

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • Developed models have higher prediction accuracy than OECD QSAR Toolbox. • Putative
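
    A hedged sketch of the classification step only (descriptor generation, curation, and applicability-domain assessment are separate): a random forest discriminating sensitizers from non-sensitizers on a precomputed descriptor matrix, with a cross-validated correct classification rate. The synthetic descriptors below are stand-ins, not SiRMS or Dragon output.

    ```python
    # Illustrative random forest QSAR classifier on synthetic descriptors.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=600, n_features=50, n_informative=10, random_state=42)

    rf = RandomForestClassifier(n_estimators=500, random_state=42)
    ccr = cross_val_score(rf, X, y, cv=5, scoring="balanced_accuracy").mean()
    print("cross-validated CCR (balanced accuracy):", round(ccr, 3))
    ```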

  14. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    International Nuclear Information System (INIS)

    Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2015-01-01

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • Developed models have higher prediction accuracy than OECD QSAR Toolbox. • Putative

  15. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Full Text Available Given the probability distribution function relating the recurrence interval to the time of the previous occurrence on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault over a particular lifetime up to the present. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source, and serves as a reference for a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the fault. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used on those faults, particularly where the elapsed time since the last event on the fault (such as the Chelungpu fault) is short.
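
    The time-dependent versus time-independent comparison can be illustrated with a hedged sketch: the conditional probability of a characteristic earthquake in the next 50 years under a Brownian passage time (BPT, inverse-Gaussian) renewal model versus a Poisson model. The mean recurrence interval, aperiodicity, and elapsed time are hypothetical values, not those of any Taiwanese fault.

    ```python
    # BPT (renewal) vs. Poisson conditional rupture probability; parameters are assumed.
    import numpy as np
    from scipy import stats

    mean_recurrence = 350.0   # years (assumed)
    aperiodicity = 0.5        # BPT aperiodicity alpha (assumed)
    elapsed, window = 100.0, 50.0

    # BPT with mean T and coefficient of variation alpha corresponds to
    # scipy's invgauss with mu = alpha**2 and scale = T / alpha**2.
    bpt = stats.invgauss(mu=aperiodicity**2, scale=mean_recurrence / aperiodicity**2)
    p_bpt = (bpt.cdf(elapsed + window) - bpt.cdf(elapsed)) / bpt.sf(elapsed)
    p_poisson = 1.0 - np.exp(-window / mean_recurrence)

    print(f"BPT conditional probability:      {p_bpt:.3f}")
    print(f"Poisson (time-independent) prob.: {p_poisson:.3f}")
    ```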

  16. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ~90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ~1 km, mean absolute error in flooded fraction falls to ~5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  17. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models considering different parameter combinations are developed by the authors. Results are compared to the landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
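
    A toy sketch of the weighted-overlay idea behind such models: rank-sum weights are assigned to the inducing parameters and a weighted sum of normalised raster layers produces a hazard score grid. The layers here are random placeholders, not the authors' data, and the ranking order is assumed.

    ```python
    # Rank-sum weighted overlay of landslide-inducing parameter layers (placeholder rasters).
    import numpy as np

    rng = np.random.default_rng(1)
    params = ["slope", "land_use", "lithology", "soil", "geomorphology",
              "flow_accumulation", "aspect", "dist_river", "dist_road"]
    ranks = np.arange(len(params), 0, -1)          # rank 9 (most important) .. 1 (assumed order)
    weights = ranks / ranks.sum()                  # rank-sum weights

    layers = {p: rng.random((100, 100)) for p in params}   # normalised placeholder rasters
    hazard = sum(w * layers[p] for p, w in zip(params, weights))
    print("hazard score range:", round(float(hazard.min()), 2), "-", round(float(hazard.max()), 2))
    ```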

  18. Climate Prediction Center (CPC) U.S. Hazards Outlook

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Prediction Center releases a US Hazards Outlook daily, Monday through Friday. The product highlights regions of anticipated hazardous weather during the...

  19. Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach

    Science.gov (United States)

    van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.

    2015-01-01

    Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion-related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0

  20. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging (BMA) method is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas a Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
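
    One common approximation used for Bayesian model averaging, sketched below under assumptions, weights candidate covariate sets by exp(-BIC/2) and averages their predictions; the candidate models, BIC values, and failure-rate figures are hypothetical, and this is not the authors' hierarchical Bayesian implementation.

    ```python
    # BMA-style weighting of candidate models via the exp(-BIC/2) approximation (toy numbers).
    import numpy as np

    candidate_bics = {"age+diameter": 812.4, "age+diameter+soil": 808.9,
                      "age+length+soil": 815.2}               # assumed BIC values
    predictions = {"age+diameter": 0.031, "age+diameter+soil": 0.027,
                   "age+length+soil": 0.035}                  # breaks/km/yr, assumed

    bics = np.array(list(candidate_bics.values()))
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()

    bma_prediction = sum(w * predictions[m] for m, w in zip(candidate_bics, weights))
    print("BMA-averaged failure rate:", round(bma_prediction, 4))
    ```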

  1. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  2. The New Italian Seismic Hazard Model

    Science.gov (United States)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly aimed at the update of the seismic code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has been selecting the most up-to-date information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources, and gridded seismicity based on different approaches. The GMPEs task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme

  3. Modeling and Prediction of Wildfire Hazard in Southern California, Integration of Models with Imaging Spectrometry

    Science.gov (United States)

    Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)

    2001-01-01

    Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated

  4. Multivariate Models for Prediction of Human Skin Sensitization ...

    Science.gov (United States)

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine

  5. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    Science.gov (United States)

    Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2015-01-01

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674

  6. Use of the Hazard Prediction and Assessment Capability (HPAC) at the Savannah River Site

    International Nuclear Information System (INIS)

    BUCKLEY, ROBERT

    2004-01-01

    This report provides background information on the implementation of the Hazard Prediction and Assessment Capability (HPAC) software package at the Savannah River Site for atmospheric modeling. Developed by the Defense Threat Reduction Agency (DTRA), HPAC is actually a suite of models that allows for various modes of release of radiological, chemical and biological agents, generates interpolated meteorological data fields based on inputted meteorology, and transports the material using a tested transport and diffusion model. A discussion of meteorological data availability for direct use by HPAC is given, and is important for applications to emergency response. The data input methodology for a release from SRS is evaluated, as is an application to the World Trade Center attacks in New York City in September 2001. Benefits of the newer versions of HPAC now in use are then discussed. These include access to more meteorological data sources and improved graphical capabilities. The HPAC software package is another tool to help the Atmospheric Technologies Group (ATG) provide atmospheric transport and dispersion predictions in the event of hazardous atmospheric releases

  7. Application-driven ground motion prediction equation for seismic hazard assessments in non-cratonic moderate-seismicity areas

    Science.gov (United States)

    Bindi, D.; Cotton, F.; Kotha, S. R.; Bosse, C.; Stromeyer, D.; Grünthal, G.

    2017-09-01

    We present a ground motion prediction equation (GMPE) for probabilistic seismic hazard assessments (PSHA) in low-to-moderate seismicity areas, such as Germany. Starting from the NGA-West2 flat-file (Ancheta et al. in Earthquake Spectra 30:989-1005, 2014), we develop a model tailored to the hazard application in terms of data selection and implemented functional form. In light of such hazard application, the GMPE is derived for hypocentral distance (along with the Joyner-Boore one), selecting recordings at sites with vs30 ≥ 360 m/s, distances within 300 km, and magnitudes in the range 3 to 8 (7.4 being the maximum magnitude for the PSHA in the target area). Moreover, the complexity of the considered functional form reflects the availability of information in the target area. The median predictions are compared with those from the NGA-West2 models and with one recent European model, using Sammon's maps constructed for different scenarios. Despite the simplification in the functional form, the assessed epistemic uncertainty in the GMPE median is of the order of those affecting the NGA-West2 models for the magnitude range of interest of the hazard application. On the other hand, the simplification of the functional form led to an increase in the apparent aleatory variability. In conclusion, the GMPE developed in this study is tailored to the needs for applications in low-to-moderate seismic areas and for short return periods (e.g., 475 years); its application in studies where the hazard involves magnitudes above 7.4 and long return periods is not advised.
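
    For readers unfamiliar with GMPEs, the hedged sketch below shows a generic functional form of the kind discussed (magnitude scaling, geometric spreading with a pseudo-depth term, and a Vs30 site term); the coefficients are placeholders and this is not the model or parameterization published in the paper.

    ```python
    # Generic GMPE-style functional form with placeholder coefficients (illustration only).
    import numpy as np

    def ln_ground_motion(mag, r_hypo, vs30, c=(-1.5, 1.2, -0.08, -1.1, 0.15, -0.4), h=6.0):
        c0, c1, c2, c3, c4, c5 = c
        r = np.sqrt(r_hypo**2 + h**2)                    # effective distance with pseudo-depth h
        return (c0 + c1 * mag + c2 * mag**2              # magnitude scaling
                + (c3 + c4 * (mag - 5.0)) * np.log(r)    # magnitude-dependent geometric spreading
                + c5 * np.log(vs30 / 760.0))             # simple linear site term in Vs30

    print("ln(motion) for M6.0 at 20 km, Vs30 = 500 m/s:", round(ln_ground_motion(6.0, 20.0, 500.0), 3))
    ```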

  8. A New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.

  9. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
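
    A small sketch of fitting an Aalen additive hazards model with the lifelines library is given below; the high-dimensional gene-expression setting and the dimension-reduction (PLS) step discussed in the paper are not reproduced, and the synthetic data and column names are assumptions.

    ```python
    # Aalen additive hazards fit on synthetic low-dimensional data (illustration only).
    import numpy as np
    import pandas as pd
    from lifelines import AalenAdditiveFitter

    rng = np.random.default_rng(5)
    n = 300
    df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
    df["time"] = rng.exponential(scale=1.0 / (0.1 + 0.05 * np.abs(df["x1"])), size=n)
    df["event"] = rng.integers(0, 2, n)

    aaf = AalenAdditiveFitter(coef_penalizer=1.0)
    aaf.fit(df, duration_col="time", event_col="event")
    print(aaf.cumulative_hazards_.tail())          # time-varying cumulative regression effects
    ```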

  10. A flexible additive multiplicative hazard model

    DEFF Research Database (Denmark)

    Martinussen, T.; Scheike, T. H.

    2002-01-01

    Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect

  11. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.

  12. Evaluation of model-predicted hazardous air pollutants (HAPs) near a mid-sized U.S. airport

    Science.gov (United States)

    Vennam, Lakshmi Pradeepa; Vizuete, William; Arunachalam, Saravanan

    2015-10-01

    Accurate modeling of aircraft-emitted pollutants in the vicinity of airports is essential to study the impact on local air quality and to answer policy and health-impact related issues. To quantify air quality impacts of airport-related hazardous air pollutants (HAPs), we carried out a fine-scale (4 × 4 km horizontal resolution) Community Multiscale Air Quality (CMAQ) model simulation at the T.F. Green airport in Providence (PVD), Rhode Island. We considered temporally and spatially resolved aircraft emissions from the new Aviation Environmental Design Tool (AEDT). These model predictions were then evaluated with observations from a field campaign focused on assessing HAPs near the PVD airport. The annual normalized mean error (NME) was in the range of 36-70% for all HAPs except for acrolein (>70%). The addition of highly resolved aircraft emissions showed only marginally incremental improvements in performance (1-2% decrease in NME) for some HAPs (formaldehyde, xylene). When compared to a coarser 36 × 36 km grid resolution, the 4 × 4 km grid resolution did improve performance by up to 5-20% NME for formaldehyde and acetaldehyde. The change in power setting (from the traditional International Civil Aviation Organization (ICAO) 7% to an observation-based 4%) doubled the aircraft idling emissions of HAPs, but led to only a 2% decrease in NME. Overall modeled aircraft-attributable contributions are in the range of 0.5-28% near a mid-sized airport grid-cell, with maximum impacts seen only within 4-16 km from the airport grid-cell. Comparison of CMAQ predictions with HAP estimates from EPA's National Air Toxics Assessment (NATA) did show similar annual mean concentrations and equally poor performance. Current estimates of HAPs for PVD are a challenge for modeling systems, and refinements in our ability to simulate aircraft emissions have made only incremental improvements. Even with unrealistic increases in HAPs aviation emissions the model

  13. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison of the distributions used in hazard analysis. A simulation technique is used to study the behavior of the hazard distribution models, and the fundamentals of hazard analysis are discussed in terms of failure criteria. We illustrate the flexibility of the hazard modeling distribution, which approaches different distributions.

  14. Limitations of Cox Proportional Hazards Analysis in Mortality Prediction of Patients with Acute Coronary Syndrome

    Directory of Open Access Journals (Sweden)

    Babińska Magdalena

    2015-12-01

    Full Text Available The aim of this study was to evaluate the possibility of incorrect assessment of mortality risk factors in a group of patients affected by acute coronary syndrome, due to the lack of hazard proportionality in the Cox regression model. One hundred and fifty consecutive patients with acute coronary syndrome (ACS), with no age limit, were enrolled. Univariable and multivariable Cox proportional hazards analyses were performed. The proportional hazards assumptions were verified using Schoenfeld residuals, the χ² test, and the rank correlation coefficient between residuals and time. In the total group of 150 patients, 33 (22.0%) deaths from any cause were registered in the follow-up period of 64 months. The non-survivors were significantly older and had an increased prevalence of diabetes and erythrocyturia, a longer history of coronary artery disease, higher concentrations of serum creatinine, cystatin C, uric acid, glucose, C-reactive protein (CRP), homocysteine and N-terminal pro-B-type natriuretic peptide (NT-proBNP), and lower concentrations of serum sodium. No significant differences in echocardiography parameters were observed between groups. The following factors were predictors of death and fulfilled the proportional hazards assumption in the univariable model: smoking, occurrence of diabetes and anaemia, duration of coronary artery disease, and abnormal serum concentrations of uric acid, sodium, homocysteine, cystatin C and NT-proBNP; in the multivariable model, the predictors of death were smoking and elevated concentrations of homocysteine and NT-proBNP. The study demonstrates that violation of the proportional hazards assumption in the Cox regression model may lead to a flawed model that retains predictors whose effects are not constant over time.
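    A minimal sketch of this kind of proportional-hazards check, assuming the Python lifelines library is available; the data are synthetic, and the scaled Schoenfeld-residual test below stands in for the paper's combination of residual plots, χ² test and rank correlation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)
n = 150
df = pd.DataFrame({
    "age": rng.normal(68, 10, n),
    "log_ntprobnp": rng.normal(6, 1, n),   # log of NT-proBNP
    "smoking": rng.integers(0, 2, n),
})
# Synthetic follow-up times (months) and death indicator
risk = 0.03 * (df["age"] - 68) + 0.4 * (df["log_ntprobnp"] - 6) + 0.5 * df["smoking"]
df["time"] = rng.exponential(64 / np.exp(risk - risk.mean()))
df["death"] = (df["time"] < 64).astype(int)
df["time"] = df["time"].clip(upper=64)     # administrative censoring at 64 months

cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Test the proportional-hazards assumption with scaled Schoenfeld residuals
test = proportional_hazard_test(cph, df, time_transform="rank")
print(test.summary)   # covariates with small p-values violate the PH assumption
```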

  15. Recent advances in prediction of emission of hazardous air pollutants from coal-fired power plants

    International Nuclear Information System (INIS)

    Senior, C.L.; Helble, J.J.; Sarofim, A.F.

    2000-01-01

    Coal-fired power plants are a primary source of mercury discharge into the atmosphere along with fine particulates containing arsenic, selenium, cadmium, and other hazardous air pollutants. Information regarding the speciation of these toxic metals is necessary to accurately predict their atmospheric transport and fate in the environment. New predictive tools have been developed to allow utilities to better estimate the emissions of toxic metals from coal-fired power plants. These prediction equations are based on fundamental physics and chemistry and can be applied to a wide variety of fuel types and combustion conditions. The models have significantly improved the ability to predict the emissions of air toxic metals in fine particulate and gas-phase mercury. In this study, the models were successfully tested using measured mercury speciation and mass balance information collected from coal-fired power plants

  16. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data in two churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.
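    A rough sketch of the hierarchical idea, assuming scikit-learn and lifelines, with k-means substituted for the SOM/α-FCM stage: cluster and filter the data, classify churners with a neural network, then fit a Cox model on the correctly classified customers. The data and thresholds are invented and this is not the paper's exact pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "calls_per_month": rng.poisson(40, n),
    "complaints": rng.poisson(1, n),
    "tenure_months": rng.integers(1, 60, n),
})
df["churned"] = rng.binomial(1, 1 / (1 + np.exp(2 - 0.8 * df["complaints"])))

X = StandardScaler().fit_transform(df[["calls_per_month", "complaints"]])

# Stage 1: cluster into two groups and drop points far from their centroid (outlier filter)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.quantile(dist, 0.95)

# Stage 2: a neural network assigns the remaining customers to churner / non-churner groups
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
nn.fit(X[keep], df.loc[keep, "churned"])
correct = df.index[keep][nn.predict(X[keep]) == df.loc[keep, "churned"].to_numpy()]

# Stage 3: Cox proportional hazards model of time-to-churn on the correctly classified data
cox_df = df.loc[correct, ["tenure_months", "churned", "calls_per_month", "complaints"]]
cph = CoxPHFitter().fit(cox_df, duration_col="tenure_months", event_col="churned")
print(cph.summary[["exp(coef)", "p"]])
```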

  17. Hazard Forecasting by MRI: A Prediction Algorithm of the First Kind

    Science.gov (United States)

    Lomnitz, C.

    2003-12-01

    Seismic gaps do not tell us when and where the next earthquake is due. We present new results on limited earthquake hazard prediction at plate boundaries. Our algorithm quantifies earthquake hazard in seismic gaps. The prediction window found for M7 is on the order of 50 km by 20 years (Lomnitz, 1996a). The earth is unstable with respect to small perturbations of the initial conditions. A prediction of the first kind is an estimate of the time evolution of a complex system with fixed boundary conditions in response to changes in the initial state, for example, weather prediction (Edward Lorenz, 1975; Hasselmann, 2002). We use the catalog of large world earthquakes as a proxy for the initial conditions. The MRI algorithm simulates the response of the system to updating the catalog. After a local stress transient dP the entropy decays as (grad dP)² due to transient flows directed toward the epicenter. Healing is the thermodynamic process which resets the state of stress. It proceeds as a power law from the rupture boundary inwards, as in a wound. The half-life of a rupture is defined as the healing time which shrinks the size of a scar by half. Healed segments of plate boundary can rupture again. From observations in Chile, Mexico and Japan we find that the half-life of a seismic rupture is about 20 years, in agreement with seismic gap observations. The moment ratio MR is defined as the contrast between the cumulative regional moment release and the local moment deficiency at time t along the plate boundary. The procedure is called MRI. The findings: (1) MRI works; (2) major earthquakes match prominent peaks in the MRI graph; (3) important events (Central Chile 1985; Mexico 1985; Kobe 1995) match MRI peaks which began to emerge 10 to 20 years before the earthquake; (4) The emergence of peaks in MRI depends on earlier ruptures that occurred, not adjacent to but at 10 to 20 fault lengths from the epicentral region, in agreement with triggering effects. The hazard

  18. Agent-based simulation for human-induced hazard analysis.

    Science.gov (United States)

    Bulleit, William M; Drewek, Matthew W

    2011-02-01

    Terrorism could be treated as a hazard for design purposes. For instance, the terrorist hazard could be analyzed in a manner similar to the way that seismic hazard is handled. No matter how terrorism is dealt with in the design of systems, the need for predictions of the frequency and magnitude of the hazard will be required. And, if the human-induced hazard is to be designed for in a manner analogous to natural hazards, then the predictions should be probabilistic in nature. The model described in this article is a prototype model that used agent-based modeling (ABM) to analyze terrorist attacks. The basic approach in this article of using ABM to model human-induced hazards has been preliminarily validated in the sense that the attack magnitudes seem to be power-law distributed and attacks occur mostly in regions where high levels of wealth pass through, such as transit routes and markets. The model developed in this study indicates that ABM is a viable approach to modeling socioeconomic-based infrastructure systems for engineering design to deal with human-induced hazards. © 2010 Society for Risk Analysis.

  19. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The parameter most commonly used in attenuation relations is peak ground acceleration or spectral acceleration, because it provides the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
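    A minimal sketch of fitting a simple attenuation relation of the generic form ln PGA = c0 + c1*M + c2*ln(R + c3) + c4*S by nonlinear least squares; the functional form, coefficients and data are illustrative and not the model derived in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
n = 400
M = rng.uniform(4.0, 7.0, n)          # magnitude
R = rng.uniform(5.0, 200.0, n)        # hypocentral distance, km
S = rng.integers(0, 2, n)             # site class dummy (0 = rock, 1 = soil)

# Synthetic "observed" ln(PGA) generated from an assumed true model plus lognormal scatter
ln_pga = -3.5 + 1.0 * M - 1.6 * np.log(R + 10.0) + 0.3 * S + rng.normal(0, 0.5, n)

def gmpe(X, c0, c1, c2, c3, c4):
    M, R, S = X
    return c0 + c1 * M + c2 * np.log(R + c3) + c4 * S

coef, cov = curve_fit(gmpe, (M, R, S), ln_pga, p0=[-3.0, 1.0, -1.5, 5.0, 0.2])
sigma = np.std(ln_pga - gmpe((M, R, S), *coef))
print("coefficients:", np.round(coef, 2), " aleatory sigma (ln units):", round(float(sigma), 2))

# Median PGA prediction for a M6.5 event at 30 km on soil
print("median PGA [g]:", round(float(np.exp(gmpe((6.5, 30.0, 1), *coef))), 3))
```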

  20. Visual motion perception predicts driving hazard perception ability.

    Science.gov (United States)

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin ). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  1. Seismic Hazard Assessment in Site Evaluation for Nuclear Installations: Ground Motion Prediction Equations and Site Response

    International Nuclear Information System (INIS)

    2016-07-01

    The objective of this publication is to provide the state-of-the-art practice and detailed technical elements related to ground motion evaluation by ground motion prediction equations (GMPEs) and site response in the context of seismic hazard assessments as recommended in IAEA Safety Standards Series No. SSG-9, Seismic Hazards in Site Evaluation for Nuclear Installations. The publication includes the basics of GMPEs, ground motion simulation, selection and adjustment of GMPEs, site characterization, and modelling of site response in order to improve seismic hazard assessment. The text aims at delineating the most important aspects of these topics (including current practices, criticalities and open problems) within a coherent framework. In particular, attention has been devoted to filling conceptual gaps. It is written as a reference text for trained users who are responsible for planning preparatory seismic hazard analyses for siting of all nuclear installations and/or providing constraints for anti-seismic design and retrofitting of existing structures

  2. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
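    A minimal sketch of the comparison described, assuming scikit-learn: a logistic outcome model adjusting for all covariates versus one that replaces the covariates with an estimated propensity score, compared by cross-validated AUC (discrimination) and Brier score; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(4)
n = 3000
X = rng.normal(size=(n, 5))                                   # baseline covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.5 * treat + X[:, 2] + 0.8 * X[:, 3]))))

# Propensity score: probability of treatment given covariates
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

designs = {
    "full covariate adjustment": np.column_stack([treat, X]),
    "treatment + propensity score": np.column_stack([treat, ps]),
}
for name, D in designs.items():
    p = cross_val_predict(LogisticRegression(max_iter=1000), D, y,
                          cv=10, method="predict_proba")[:, 1]
    print(f"{name:30s} AUC={roc_auc_score(y, p):.3f}  Brier={brier_score_loss(y, p):.3f}")
```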

  3. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. It covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques of the Institut de Protection (BERSSIN) and the seven consecutive years as a seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether in the regulatory context or for specific structures (conventional or high-risk construction), seismic hazard assessment requires identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion that the structure has to withstand (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert appraisal practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)

  4. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available The proportional and additive hazard rate models have been investigated in earlier work. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders is also investigated, and examples are given to illustrate different aging properties and stochastic comparisons of the model.

  5. LAV@HAZARD: a Web-GIS Framework for Real-Time Forecasting of Lava Flow Hazards

    Science.gov (United States)

    Del Negro, C.; Bilotta, G.; Cappello, A.; Ganci, G.; Herault, A.

    2014-12-01

    Crucial to lava flow hazard assessment is the development of tools for real-time prediction of flow paths, flow advance rates, and final flow lengths. Accurate prediction of flow paths and advance rates requires not only rapid assessment of eruption conditions (especially effusion rate) but also improved models of lava flow emplacement. Here we present the LAV@HAZARD web-GIS framework, which combines spaceborne remote sensing techniques and numerical simulations for real-time forecasting of lava flow hazards. By using satellite-derived discharge rates to drive a lava flow emplacement model, LAV@HAZARD allows timely definition of parameters and maps essential for hazard assessment, including the propagation time of lava flows and the maximum run-out distance. We take advantage of the flexibility of the HOTSAT thermal monitoring system to process satellite images coming from sensors with different spatial, temporal and spectral resolutions. HOTSAT was designed to ingest infrared satellite data acquired by the MODIS and SEVIRI sensors to output hot spot location, lava thermal flux and discharge rate. We use LAV@HAZARD to merge this output with the MAGFLOW physics-based model to simulate lava flow paths and to update, in a timely manner, flow simulations. Thus, any significant changes in lava discharge rate are included in the predictions. A significant benefit in terms of computational speed was obtained thanks to the parallel implementation of MAGFLOW on graphic processing units (GPUs). All this useful information has been gathered into the LAV@HAZARD platform which, due to the high degree of interactivity, allows generation of easily readable maps and a fast way to explore alternative scenarios. We will describe and demonstrate the operation of this framework using a variety of case studies pertaining to Mt Etna, Sicily. Although this study was conducted on Mt Etna, the approach used is designed to be applicable to other volcanic areas around the world.

  6. Building a risk-targeted regional seismic hazard model for South-East Asia

    Science.gov (United States)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way, with a focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the countries of Indochina. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone and the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities arising from existing uncertainties in the rate and hazard space, using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.

  8. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age
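    A minimal sketch of the device described, assuming the lifelines library: age enters a Cox model through fractional-polynomial-style transforms (here powers -1 and 2 of age/50, chosen for illustration rather than the published powers), which lets the log-hazard rise at both young and old ages; the data are synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 3000
age = rng.uniform(25, 85, n)
size_mm = rng.gamma(4, 6, n)
a = age / 50.0

# U-shaped log-hazard in age (worse when very young or very old), plus tumour size
log_hr = 0.8 * (1 / a) + 0.3 * a**2 + 0.03 * size_mm
time = rng.exponential(15 / np.exp(log_hr - log_hr.mean()))
df = pd.DataFrame({
    "age_fp1": 1 / a,                    # fractional-polynomial term, power -1
    "age_fp2": a**2,                     # fractional-polynomial term, power  2
    "size_mm": size_mm,
    "time": np.minimum(time, 15.0),      # 15 years of follow-up
    "event": (time < 15).astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Predicted relative hazard across ages for a fixed tumour size
ages = np.arange(25, 86, 10)
grid = pd.DataFrame({"age_fp1": 50.0 / ages, "age_fp2": (ages / 50.0)**2, "size_mm": 20.0})
print(cph.predict_partial_hazard(grid).round(2))
```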

  9. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  10. The response of local residents to a chemical hazard warning: Prediction of behavioral intentions in France, Greece and the Netherlands

    NARCIS (Netherlands)

    Wiegman, O.; Komilis, Egli; Cadet, Bernard; Boer, Hendrik; Gutteling, Jan M.

    1993-01-01

    In this study Greek, French and Dutch residents of a hazardous chemical complex were confronted with a simulated warning scenario for an industrial accident and intended functional and dysfunctional behaviours were measured. Intended functional behaviours were poorly predicted by our model, while

  11. Identification, prediction, and mitigation of sinkhole hazards in evaporite karst areas

    Science.gov (United States)

    Gutierrez, F.; Cooper, A.H.; Johnson, K.S.

    2008-01-01

    Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility and, commonly, a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, investigation, prediction, and mitigation. Identification techniques include field surveys and geomorphological mapping combined with accounts from local people and historical sources. Detailed sinkhole maps can be constructed from sequential historical maps, recent topographical maps, and digital elevation models (DEMs) complemented with building-damage surveying, remote sensing, and high-resolution geodetic surveys. On a more detailed level, information from exposed paleosubsidence features (paleokarst), speleological explorations, geophysical investigations, trenching, dating techniques, and boreholes may help in investigating dissolution and subsidence features. Information on the hydrogeological pathways including caves, springs, and swallow holes are particularly important especially when corroborated by tracer tests. These diverse data sources make a valuable database-the karst inventory. From this dataset, sinkhole susceptibility zonations (relative probability) may be produced based on the spatial distribution of the features and good knowledge of the local geology. Sinkhole distribution can be investigated by spatial distribution analysis techniques including studies of preferential elongation, alignment, and nearest neighbor analysis. More objective susceptibility models may be obtained by analyzing the statistical relationships between the known sinkholes and the conditioning factors. Chronological information on sinkhole formation is required to estimate the probability of

  12. Earthquake Prediction Research In Iceland, Applications For Hazard Assessments and Warnings

    Science.gov (United States)

    Stefansson, R.

    Earthquake prediction research in Iceland, applications for hazard assessments and warnings. The first multinational earthquake prediction research project in Iceland was the SIL project of the Nordic countries, 1988-1995, encouraged by the European Council. The path selected for this research was to study the physics of crustal processes leading to earthquakes. Small earthquakes, down to magnitude zero, were considered the most significant for this purpose, because of the detailed information they provide both in time and space. The test area for the project was the earthquake-prone region of the South Iceland seismic zone (SISZ). The PRENLAB and PRENLAB-2 projects, 1996-2000, supported by the European Union, were a direct continuation of the SIL project, but with a more multidisciplinary approach. PRENLAB stands for "Earthquake prediction research in a natural laboratory". The basic objective was to advance our general understanding of where, when and how dangerous earthquake motion might strike. Methods were developed to study crustal processes and conditions using microearthquake information, continuous GPS, InSAR, theoretical modelling, fault mapping and paleoseismology. New algorithms were developed for short-term warnings. A very useful short-term warning was issued twice in the year 2000: one for the sudden start of an eruption of the Hekla volcano on February 26, and the other 25 hours before a second (in a sequence of two) magnitude 6.6 (Ms) earthquake in the South Iceland seismic zone on June 21, with the correct location and approximate size. A formal short-term warning, although not released to the public, was also issued before a magnitude 5 earthquake in November 1998. The presentation will briefly describe what these warnings were based on. A general hazard assessment was presented in scientific journals 10-15 years ago, assessing within a few kilometers the location of the faults of the two 2000 earthquakes and suggesting

  13. Levels of and changes in life satisfaction predict mortality hazards: Disentangling the role of physical health, perceived control, and social orientation.

    Science.gov (United States)

    Hülür, Gizem; Heckhausen, Jutta; Hoppmann, Christiane A; Infurna, Frank J; Wagner, Gert G; Ram, Nilam; Gerstorf, Denis

    2017-09-01

    It is well documented that well-being typically evinces precipitous decrements at the end of life. However, research has primarily taken a postdictive approach by knowing the outcome (date of death) and aligning, in retrospect, how well-being has changed for people with documented death events. In the present study, we made use of a predictive approach by examining whether and how levels of and changes in life satisfaction prospectively predict mortality hazards and delineate the role of contributing factors, including health, perceived control, and social orientation. To do so, we applied shared parameter growth-survival models to 20-year longitudinal data from 10,597 participants (n = 1,560 [15%] deceased; age at baseline: M = 44 years, SD = 17, range = 18-98 years) from the national German Socio-Economic Panel Study. Our findings showed that lower levels and steeper declines of life satisfaction each uniquely predicted higher mortality risks. Results also revealed moderating effects of age and perceived control: Life satisfaction levels and changes had stronger predictive effects for mortality hazards among older adults. Perceived control was associated with lower mortality hazards; however, this effect was diminished for those who experienced accelerated life satisfaction decline. Variance decomposition suggests that predictive effects of life satisfaction trajectories were partially unique (3%-6%) and partially shared with physical health, perceived control, and social orientation (17%-19%). Our discussion focuses on the strengths and challenges of a predictive approach to link developmental changes (in life satisfaction) to mortality hazards, and considers implications of our findings for healthy aging. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Spatial prediction and validation of zoonotic hazard through micro-habitat properties: where does Puumala hantavirus hole up?

    Science.gov (United States)

    Khalil, Hussein; Olsson, Gert; Magnusson, Magnus; Evander, Magnus; Hörnfeldt, Birger; Ecke, Frauke

    2017-07-26

    To predict the risk of infectious diseases originating in wildlife, it is important to identify habitats that allow the co-occurrence of pathogens and their hosts. Puumala hantavirus (PUUV) is a directly-transmitted RNA virus that causes hemorrhagic fever in humans, and is carried and transmitted by the bank vole (Myodes glareolus). In northern Sweden, bank voles undergo 3-4 year population cycles, during which their spatial distribution varies greatly. We used boosted regression trees, a technique inspired by machine learning, on a 10-year time series (fall 2003-2013) to develop a spatial predictive model assessing seasonal PUUV hazard using micro-habitat variables in a landscape heavily modified by forestry. We validated the models in an independent study area approx. 200 km away by predicting seasonal presence of infected bank voles in a five-year period (2007-2010 and 2015). The distribution of PUUV-infected voles varied seasonally and inter-annually. In spring, micro-habitat variables related to cover and food availability in forests predicted both bank vole and infected bank vole presence. In fall, the presence of PUUV-infected voles was generally restricted to spruce forests where cover was abundant, despite the broad landscape distribution of bank voles in general. We hypothesize that the discrepancy in distribution between infected and uninfected hosts in fall was related to higher survival of PUUV and/or PUUV-infected voles in the environment, especially where cover is plentiful. Moist and mesic old spruce forests, with abundant cover such as large holes and bilberry shrubs that also provide food, were most likely to harbor infected bank voles. The models developed using long-term and spatially extensive data can be extrapolated to other areas in northern Fennoscandia. To predict the hazard of directly transmitted zoonoses in areas with unknown risk status, models based on micro-habitat variables and developed through machine learning techniques in
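    A rough sketch of the modelling step, with scikit-learn's gradient boosting standing in for the boosted regression trees used in the study: fit presence of infected voles from micro-habitat covariates in one area and validate in an independent area. The covariate names, effect sizes and data are invented.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

def simulate_area(n):
    df = pd.DataFrame({
        "spruce_cover": rng.uniform(0, 1, n),       # proportion of spruce forest
        "bilberry_cover": rng.uniform(0, 1, n),     # ground cover / food availability
        "large_holes": rng.poisson(2, n),           # refuge availability
        "stand_age": rng.uniform(5, 150, n),
    })
    p = 1 / (1 + np.exp(-(-3 + 2.5 * df.spruce_cover + 1.5 * df.bilberry_cover
                          + 0.3 * df.large_holes)))
    df["infected_vole_present"] = rng.binomial(1, p)
    return df

train = simulate_area(600)   # e.g. the long-term study area
test = simulate_area(200)    # independent validation area ~200 km away

features = ["spruce_cover", "bilberry_cover", "large_holes", "stand_age"]
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(train[features], train["infected_vole_present"])

pred = brt.predict_proba(test[features])[:, 1]
print("external AUC:", round(roc_auc_score(test["infected_vole_present"], pred), 3))
print(dict(zip(features, brt.feature_importances_.round(2))))
```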

  15. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code, SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, the code takes the latest knowledge in the field of earthquake engineering into account; for example, it incorporates the group delay time of observed records and an update process model of active faults. This report is the user's guide for SHEAT-FM, covering the outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information and the execution method. (author)

  16. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
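    A minimal hand-rolled sketch of the underlying idea, without a full Bayesian-network library: discretize the predictors, estimate the conditional probability of a retreat event from past observations, and query the table for a new transect. The variables, classes and data are illustrative, not the study's network.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "cliff_slope": rng.uniform(30, 90, n),          # degrees
    "wave_impact_hours": rng.uniform(0, 400, n),    # forcing over the forecast period
    "weak_lithology": rng.integers(0, 2, n),        # 1 = weakly consolidated material
})
p = 1 / (1 + np.exp(-(-4 + 0.02 * df.cliff_slope
                      + 0.008 * df.wave_impact_hours + 1.2 * df.weak_lithology)))
df["retreat"] = rng.binomial(1, p)                  # observed erosion event (yes/no)

# Discretize into classes, as a discrete Bayesian network would
df["slope_class"] = pd.cut(df.cliff_slope, [0, 50, 70, 90], labels=["low", "mid", "steep"])
df["wave_class"] = pd.qcut(df.wave_impact_hours, 3, labels=["calm", "moderate", "energetic"])

# Conditional probability table: P(retreat | slope, waves, lithology)
cpt = df.groupby(["slope_class", "wave_class", "weak_lithology"], observed=True)["retreat"].mean()
print(cpt.round(2))

# "Query": forecast probability of retreat for a steep, weak cliff under energetic waves
print("P(retreat | steep, energetic, weak) =", round(cpt.loc[("steep", "energetic", 1)], 2))
```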

  17. About Using Predictive Models and Tools To Assess Chemicals under TSCA

    Science.gov (United States)

    As part of EPA's effort to promote chemical safety, OPPT provides public access to predictive models and tools which can help inform the public on the hazards and risks of substances and improve chemical management decisions.

  18. An optimization model for transportation of hazardous materials

    International Nuclear Information System (INIS)

    Seyed-Hosseini, M.; Kheirkhah, A. S.

    2005-01-01

    In this paper, the optimal routing problem for the transportation of hazardous materials is studied. Routing aimed at reducing the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, a routing problem is designed according to the current rules and regulations for road transportation of hazardous materials in Iran. In this problem, the routes for several independent movements are determined simultaneously. To examine the model, the problem of transporting two different dangerous materials in the road network of Mazandaran province in northern Iran is formulated and solved by applying an integer programming model
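    A minimal sketch of the routing idea using networkx, assuming each road link carries a travel cost and a population-exposure risk combined into a single weighted objective; the paper's actual model is an integer program over several simultaneous movements, which this toy single-movement version does not reproduce.

```python
import networkx as nx

# Toy road network: edges carry a travel cost (km) and a transport-risk score
G = nx.DiGraph()
edges = [
    ("Depot", "A", 20, 5), ("Depot", "B", 30, 1),
    ("A", "Plant", 25, 8), ("B", "Plant", 35, 2),
    ("A", "B", 10, 1),
]
for u, v, cost, risk in edges:
    G.add_edge(u, v, cost=cost, risk=risk)

def route(alpha):
    """Shortest path under the combined objective alpha*cost + (1-alpha)*scaled risk."""
    for u, v, d in G.edges(data=True):
        d["w"] = alpha * d["cost"] + (1 - alpha) * 10 * d["risk"]
    return nx.shortest_path(G, "Depot", "Plant", weight="w")

print("cost-only route:  ", route(alpha=1.0))   # ['Depot', 'A', 'Plant']
print("risk-averse route:", route(alpha=0.2))   # ['Depot', 'B', 'Plant']
```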

  19. Wave climate change, coastline response and hazard prediction in New South Wales, Australia

    International Nuclear Information System (INIS)

    Goodwin, Ian D.; Verdon, Danielle; Cowell, Peter

    2007-01-01

    Full text: Considerable research effort has been directed towards understanding and the gross prediction of shoreline response to sea level rise (e.g. Cowell et al. 2003a, b). In contrast, synoptic prediction of changes in the planform configuration of shorelines in response to changes in wind and wave climates over many decades has been limited by the lack of geohistorical data on shoreline alignment evolution and long time series of wave climate. This paper presents new data sets on monthly mean wave direction variability based on: a. Waverider buoy data; b. a reconstruction of monthly mid-shelf wave direction, 1877 to 2002 AD, from historical MSLP data (Goodwin 2005); and c. a multi-decadal reconstruction of wave direction, in association with the Interdecadal Pacific Oscillation and the Southern Annular Mode of climate variability, covering the past millennium. A model of coastline response to the wave climate variability is presented for northern and central New South Wales (NSW) for decadal to multi-decadal time scales, and is based on instrumental and geohistorical data. The sensitivity of the coastline position and alignment, and beach state to mean and extreme wave climate changes is demonstrated (e.g. Goodwin et al. 2006). State changes in geometric shoreline alignment rotation, sand volume (progradation/recession) for NSW and mean wave direction are shown to be in agreement with the low-frequency change in Pacific-wide climate. Synoptic typing of climate patterns using Self Organised Mapping methods is used to downscale CSIRO GCM output for this century. The synoptic types are correlated to instrumental wave climate data and coastal behaviour. The shifts in downscaled synoptic types for 2030 and 2070 AD are then used as the basis for predicting mean wave climate changes, coastal behaviour and hazards along the NSW coastline. The associated coastal hazards relate to the definition of coastal land loss through rising sea levels and shoreline

  20. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis for minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States

  1. Hazard identification based on plant functional modelling

    International Nuclear Information System (INIS)

    Rasmussen, B.; Whetton, C.

    1993-10-01

    A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
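    A minimal sketch of the object structure described, using Python dataclasses: every plant aspect is an Intent realized by Methods and limited by Constraints, and Methods and Constraints can themselves be decomposed into lower-level Intents. The class layout and the example decomposition are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Intent:
    goal: str
    methods: List["Method"] = field(default_factory=list)
    constraints: List["Constraint"] = field(default_factory=list)

@dataclass
class Method:
    description: str
    sub_intent: Optional[Intent] = None      # functional decomposition

@dataclass
class Constraint:
    description: str
    sub_intent: Optional[Intent] = None

def walk(intent: Intent, depth: int = 0):
    """Print the hierarchical, object-oriented functional decomposition."""
    pad = "  " * depth
    print(f"{pad}INTENT: {intent.goal}")
    for m in intent.methods:
        print(f"{pad}  method: {m.description}")
        if m.sub_intent:
            walk(m.sub_intent, depth + 2)
    for c in intent.constraints:
        print(f"{pad}  constraint: {c.description}")
        if c.sub_intent:
            walk(c.sub_intent, depth + 2)

cool = Intent("Remove decay heat from the process",
              methods=[Method("Circulate coolant",
                              sub_intent=Intent("Keep pumps available",
                                                methods=[Method("Maintain backup power")]))],
              constraints=[Constraint("Coolant temperature below design limit")])
walk(cool)
```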

  2. Modeling lahar behavior and hazards

    Science.gov (United States)

    Manville, Vernon; Major, Jon J.; Fagents, Sarah A.

    2013-01-01

    Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr-1. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.

  3. A molecular prognostic model predicts esophageal squamous cell carcinoma prognosis.

    Directory of Open Access Journals (Sweden)

    Hui-Hui Cao

    Full Text Available Esophageal squamous cell carcinoma (ESCC) has the highest mortality rates in China. The 5-year survival rate of ESCC remains dismal despite improvements in treatments such as surgical resection and adjuvant chemoradiation, and current clinical staging approaches are limited in their ability to effectively stratify patients for treatment options. The aim of the present study, therefore, was to develop an immunohistochemistry-based prognostic model to improve clinical risk assessment for patients with ESCC. We developed a molecular prognostic model based on the combined expression of the axis of epidermal growth factor receptor (EGFR), phosphorylated Specificity protein 1 (p-Sp1), and Fascin proteins. The presence of this prognostic model and associated clinical outcomes were analyzed for 130 formalin-fixed, paraffin-embedded esophageal curative resection specimens (generation dataset) and validated using an independent cohort of 185 specimens (validation dataset). The expression of these three genes at the protein level was used to build a molecular prognostic model that was highly predictive of ESCC survival in both generation and validation datasets (P = 0.001). Regression analysis showed that this molecular prognostic model was strongly and independently predictive of overall survival (hazard ratio = 2.358 [95% CI, 1.391-3.996], P = 0.001 in generation dataset; hazard ratio = 1.990 [95% CI, 1.256-3.154], P = 0.003 in validation dataset). Furthermore, the predictive ability of these 3 biomarkers in combination was more robust than that of each individual biomarker. This technically simple immunohistochemistry-based molecular model accurately predicts ESCC patient survival and thus could serve as a complement to current clinical risk stratification approaches.

  4. Modeling Compound Flood Hazards in Coastal Embayments

    Science.gov (United States)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), preferred marginal scenario and reproduced time series of ensembles based on Monte Carlo sampling of bivariate hazard domain. The comparison between resulting extreme water dynamics under the compound hazard scenarios explained above provides an insight to the
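    A minimal sketch of the copula step, assuming a Gaussian copula fitted from Kendall's tau (one simple choice of family, not necessarily the one used in the study): estimate the dependence between river discharge and coastal water level, then sample correlated pairs for compound design scenarios. The data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic annual-maximum pairs: river discharge (m3/s) and coastal water level (m)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=60)
discharge = stats.gamma(3, scale=150).ppf(stats.norm.cdf(z[:, 0]))
sea_level = stats.gumbel_r(loc=1.0, scale=0.3).ppf(stats.norm.cdf(z[:, 1]))

# Fit a Gaussian copula: Kendall's tau -> correlation via rho = sin(pi * tau / 2)
tau, _ = stats.kendalltau(discharge, sea_level)
rho = np.sin(np.pi * tau / 2)
print(f"Kendall tau = {tau:.2f}, Gaussian-copula rho = {rho:.2f}")

# Sample joint scenarios on the uniform scale, then map through empirical marginals
u = stats.norm.cdf(rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10000))
q_sim = np.quantile(discharge, u[:, 0])
wl_sim = np.quantile(sea_level, u[:, 1])

# E.g. probability that a high-discharge event coincides with a high water level
joint = np.mean((q_sim > np.quantile(discharge, 0.9)) & (wl_sim > np.quantile(sea_level, 0.9)))
print(f"P(both above their 90th percentile) = {joint:.3f} (vs 0.010 if independent)")
```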

  5. Transferring the Malaria Epidemic Prediction Model to Users in East ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Transferring the Malaria Epidemic Prediction Model to Users in East Africa. In the highlands of East Africa, epidemic malaria is an emerging climate-related hazard that urgently needs addressing. Malaria incidence increased by 337% during the 1987 epidemic in Rwanda. In Tanzania, Uganda and Kenya, malaria incidence ...

  6. Proportional hazards models of infrastructure system recovery

    International Nuclear Information System (INIS)

    Barker, Kash; Baroud, Hiba

    2014-01-01

    As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set
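    A minimal sketch of the approach, assuming the lifelines library and an invented outage data set: fit a Cox model to restoration times, read exp(coef) as multipliers on the instantaneous recovery rate, and use the survival function for the likelihood that recovery has not yet occurred by a given time.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 300
df = pd.DataFrame({
    "log_customers": rng.normal(8, 1, n),     # log of customers affected
    "ice_storm": rng.integers(0, 2, n),       # disruptive-event descriptor
    "crews": rng.integers(1, 8, n),           # repair crews deployed
})
# Synthetic restoration times (hours): more crews -> faster recovery, ice storms -> slower
rate = 0.05 * np.exp(0.3 * df["crews"] - 0.8 * df["ice_storm"])
df["hours_to_restore"] = rng.exponential(1 / rate)
df["restored"] = (df["hours_to_restore"] < 168).astype(int)   # observed within one week
df["hours_to_restore"] = df["hours_to_restore"].clip(upper=168)

cph = CoxPHFitter().fit(df, duration_col="hours_to_restore", event_col="restored")
# exp(coef) > 1 means a higher instantaneous recovery rate, i.e. faster expected restoration
print(cph.summary[["exp(coef)", "p"]])

# Probability that a given outage is still unresolved after 24 hours
profile = pd.DataFrame({"log_customers": [np.log(5000.0)], "ice_storm": [1], "crews": [2]})
print(cph.predict_survival_function(profile, times=[24]))
```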

  7. Artificial neural networks versus proportional hazards Cox models to predict 45-year all-cause mortality in the Italian Rural Areas of the Seven Countries Study

    Directory of Open Access Journals (Sweden)

    Puddu Paolo

    2012-07-01

    Full Text Available Abstract Background Projection pursuit regression, multilayer feed-forward networks, multivariate adaptive regression splines and trees (including survival trees have challenged classic multivariable models such as the multiple logistic function, the proportional hazards life table Cox model (Cox, the Poisson’s model, and the Weibull’s life table model to perform multivariable predictions. However, only artificial neural networks (NN have become popular in medical applications. Results We compared several Cox versus NN models in predicting 45-year all-cause mortality (45-ACM by 18 risk factors selected a priori: age; father life status; mother life status; family history of cardiovascular diseases; job-related physical activity; cigarette smoking; body mass index (linear and quadratic terms; arm circumference; mean blood pressure; heart rate; forced expiratory volume; serum cholesterol; corneal arcus; diagnoses of cardiovascular diseases, cancer and diabetes; minor ECG abnormalities at rest. Two Italian rural cohorts of the Seven Countries Study, made up of men aged 40 to 59 years, enrolled and first examined in 1960 in Italy. Cox models were estimated by: a forcing all factors; b a forward-; and c a backward-stepwise procedure. Observed cases of deaths and of survivors were computed in decile classes of estimated risk. Forced and stepwise NN were run and compared by C-statistics (ROC analysis with the Cox models. Out of 1591 men, 1447 died. Model global accuracies were extremely high by all methods (ROCs > 0.810 but there was no clear-cut superiority of any model to predict 45-ACM. The highest ROCs (> 0.838 were observed by NN. There were inter-model variations to select predictive covariates: whereas all models concurred to define the role of 10 covariates (mainly cardiovascular risk factors, family history, heart rate and minor ECG abnormalities were not contributors by Cox models but were so by forced NN. Forced expiratory volume and arm

  8. The 2014 United States National Seismic Hazard Model

    Science.gov (United States)

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.

    2015-01-01

    New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.

  9. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    Science.gov (United States)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing an accurate seismic hazard map and predicting hazardous areas are necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data even when data are limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.

  10. A semiparametric hazard model of activity timing and sequencing decisions during visits to theme parks using experimental design data

    NARCIS (Netherlands)

    Kemperman, A.D.A.M.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    In this study we introduce a semiparametric hazard-based duration model to predict the timing and sequence of theme park visitors' activity choice behavior. The model is estimated on the basis of observations of consumer choices in various hypothetical theme parks. These parks are constructed by

  11. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    Directory of Open Access Journals (Sweden)

    J. Blahut

    2010-11-01

    Full Text Available Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise

  12. Report 6: Guidance document. Man-made hazards and Accidental Aircraft Crash hazards modelling and implementation in extended PSA

    International Nuclear Information System (INIS)

    Kahia, S.; Brinkman, H.; Bareith, A.; Siklossy, T.; Vinot, T.; Mateescu, T.; Espargilliere, J.; Burgazzi, L.; Ivanov, I.; Bogdanov, D.; Groudev, P.; Ostapchuk, S.; Zhabin, O.; Stojka, T.; Alzbutas, R.; Kumar, M.; Nitoi, M.; Farcasiu, M.; Borysiewicz, M.; Kowal, K.; Potempski, S.

    2016-01-01

    The goal of this report is to provide guidance on practices to model man-made hazards (mainly external fires and explosions) and accidental aircraft crash hazards and to implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the first ASAMPSA-E End Users Workshop (May 2014, Uppsala, Sweden). The objective of WP22 is to provide solutions for the different parts of a man-made hazards Level 1 PSA. This guidance focuses on man-made hazards, namely external fires and explosions, and on accidental aircraft crash hazards. The guidance developed refers to existing guidance whenever possible. The initial part of the guidance (the WP21 part) reflects current practices to assess the frequencies for each type of hazard or combination of hazards (including correlated hazards) as initiating events for PSAs. The sources and quality of hazard data, the elements of hazard assessment methodologies and relevant examples are discussed. Classification and criteria to properly assess hazard combinations, as well as examples and methods for the assessment of these combinations, are included in this guidance. Appendices present additional material with examples of practical approaches to aircraft crash and man-made hazards. The following issues are addressed: 1) hazard assessment methodologies, including issues related to hazard combinations, 2) modelling of equipment of safety-related SSCs, 3) HRA, 4) emergency response, 5) multi-unit issues. Recommendations, as well as limitations, gaps identified in the existing methodologies, and a list of open issues, are included. At all stages of this guidance, and especially from an industrial end-user perspective, one must keep in mind that the development of man-made hazards probabilistic analysis must be conditioned on the ability to ultimately obtain a representative risk

  13. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Science.gov (United States)

    Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  14. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Directory of Open Access Journals (Sweden)

    Mark Stirling

    2017-06-01

    Full Text Available We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration, and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g., short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  15. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high-risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. The prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed good performance.
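
    For readers unfamiliar with the metric quoted above, the sketch below shows how a Harrell's C-statistic can be computed for a fitted Cox incidence model; the synthetic cohort and the covariates (age, BMI, smoking) are illustrative assumptions, not the Korean cohort data.

```python
# Sketch: Harrell's C-statistic for a Cox-based incidence model on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 2000
age = rng.uniform(30, 70, n)
bmi = rng.normal(24, 3, n)
smoking = rng.integers(0, 2, n)
risk = 0.05 * (age - 50) + 0.04 * (bmi - 24) + 0.5 * smoking
t = rng.exponential(np.exp(-risk) * 25.0)        # years to incident cancer (synthetic)
d = (t < 11.4).astype(int)                        # follow-up window from the abstract
t = np.minimum(t, 11.4)

df = pd.DataFrame({"t": t, "d": d, "age": age, "bmi": bmi, "smoking": smoking})
model = CoxPHFitter().fit(df, duration_col="t", event_col="d")

# Harrell's C: concordance between the predicted risk ordering and observed times.
# Negate the partial hazard because concordance_index expects higher scores to
# mean longer event-free time.
c = concordance_index(df["t"], -model.predict_partial_hazard(df), df["d"])
print("Harrell's C-statistic:", round(c, 3))
```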

  16. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
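
    The landmarking idea discussed above can be illustrated compactly: at a landmark time s, keep only the subjects still event-free, use their current marker value as a covariate, and fit a Cox model on the residual time. The sketch below uses synthetic data and lifelines; it is an approximation of the approach, not the paper's code.

```python
# Sketch: landmark Cox model for dynamic prediction at landmark time s.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000
baseline_marker = rng.normal(0, 1, n)
slope = rng.normal(0.2, 0.05, n)                 # per-year marker drift
lin = 0.8 * baseline_marker
event_time = rng.exponential(np.exp(-lin) * 8.0)
cens_time = rng.uniform(2, 12, n)
t = np.minimum(event_time, cens_time)
d = (event_time <= cens_time).astype(int)

s = 3.0                                          # landmark time (years)
at_risk = t > s                                  # still event-free and in follow-up at s
marker_at_s = baseline_marker + slope * s        # marker value observed at s

lm = pd.DataFrame({
    "resid_time": t[at_risk] - s,                # time measured from the landmark
    "event": d[at_risk],
    "marker": marker_at_s[at_risk],
})
cox_lm = CoxPHFitter().fit(lm, duration_col="resid_time", event_col="event")

# Dynamic prediction: P(survive s + 2 years | event-free at s, marker value 1.5).
new = pd.DataFrame({"marker": [1.5]})
print(cox_lm.predict_survival_function(new, times=[2.0]))
```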

  17. Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site

    Science.gov (United States)

    Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.

    2012-04-01

    The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will differ depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and quality of data are generally very heterogeneous at a regional scale, it is necessary to take uncertainty into account in the analysis. In this perspective, a new hazard modeling method was developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking data uncertainty into account. The method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert-based approach is still necessary to finalize the maps, as it is the only way to take into account some influential factors in slope stability, such as the heterogeneity of the geological formations or the effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, a land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and on different hydrological contexts varying in time. This communication, realized within the framework of the Safeland project, is supported by the European Commission under the 7th Framework Programme for Research and Technological

  18. Development of a high-fidelity numerical model for hazard prediction in the urban environment

    International Nuclear Information System (INIS)

    Lien, F.S.; Yee, E.; Ji, H.; Keats, A.; Hsieh, K.J.

    2005-01-01

    The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition, and contamination are emerging threats in an uncertain world. The transport, dispersion, deposition, and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent's movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada's more broadly based effort at advancing counter-terrorism planning and operational capabilities. The objective of this paper is to report the progress of developing and validating an integrated, state-of-the-art, high-fidelity multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN materials. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties, and contamination and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident based on a pre-determined decision making framework. (author)

  19. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  20. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
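
    A toy illustration of the hazard-curve assembly step described above: per-rupture annual rates are combined with simulated intensity measures to give an annual exceedance rate, and then a probability of exceedance over 50 years under a Poisson assumption. All numbers are made up; this is not the CyberShake platform.

```python
# Sketch: site-specific hazard curve from per-rupture annual rates and simulated
# intensity measures (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(4)
n_ruptures = 200
annual_rate = rng.uniform(1e-5, 1e-3, n_ruptures)        # annual rate of each rupture
# Each rupture has many variations, giving a distribution of spectral acceleration.
median_sa = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n_ruptures)  # g
n_var = 100
sa = median_sa[:, None] * rng.lognormal(0.0, 0.5, (n_ruptures, n_var))

# Annual rate of exceeding x: sum over ruptures of rate * fraction of variations
# whose simulated ground motion exceeds x.
x_grid = np.logspace(-2, 0, 50)                           # 0.01 g to 1 g
frac_exceed = (sa[:, :, None] > x_grid).mean(axis=1)      # shape (ruptures, x)
lam = (annual_rate[:, None] * frac_exceed).sum(axis=0)    # annual exceedance rate

# Probability of exceedance in 50 years, assuming Poissonian occurrence.
p50 = 1.0 - np.exp(-lam * 50.0)
for x, p in zip(x_grid[::10], p50[::10]):
    print(f"SA > {x:.3f} g: {p:.4f} probability in 50 yr")
```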

  1. Evaluating the hazard from Siding Spring dust: Models and predictions

    Science.gov (United States)

    Christou, A.

    2014-12-01

    Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan) by high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time given to characterise this object until Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.

  2. Multi-hazard risk analysis related to hurricanes

    Science.gov (United States)

    Lin, Ning

    Hurricanes present major hazards to the United States. Associated with extreme winds, heavy rainfall, and storm surge, landfalling hurricanes often cause enormous structural damage to coastal regions. Hurricane damage risk assessment provides the basis for loss mitigation and related policy-making. Current hurricane risk models, however, often oversimplify the complex processes of hurricane damage. This dissertation aims to improve existing hurricane risk assessment methodology by coherently modeling the spatial-temporal processes of storm landfall, hazards, and damage. Numerical modeling technologies are used to investigate the multiplicity of hazards associated with landfalling hurricanes. The application and effectiveness of current weather forecasting technologies to predict hurricane hazards is investigated. In particular, the Weather Research and Forecasting model (WRF), with Geophysical Fluid Dynamics Laboratory (GFDL)'s hurricane initialization scheme, is applied to the simulation of the wind and rainfall environment during hurricane landfall. The WRF model is further coupled with the Advanced Circulation (AD-CIRC) model to simulate storm surge in coastal regions. A case study examines the multiple hazards associated with Hurricane Isabel (2003). Also, a risk assessment methodology is developed to estimate the probability distribution of hurricane storm surge heights along the coast, particularly for data-scarce regions, such as New York City. This methodology makes use of relatively simple models, specifically a statistical/deterministic hurricane model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, to simulate large numbers of synthetic surge events, and conducts statistical analysis. The estimation of hurricane landfall probability and hazards are combined with structural vulnerability models to estimate hurricane damage risk. Wind-induced damage mechanisms are extensively studied. An innovative windborne debris risk model is

  3. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. Besides being rare, many past volcanic eruptions have left evidence that is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, the questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim of providing a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  4. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. Such clusters are followed by periods of quiescence in which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and it can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard results are estimated by a hazard function comprising three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard accounting for the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals has been calculated for illustrative purposes.
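
    A compact numerical illustration of the three-part hazard function described above: a term that decays after the last large-earthquake cluster, a term that grows toward the next cluster, and a constant background for small-to-moderate events. The functional forms and parameter values are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch: a three-part time-dependent hazard rate of the kind described above.
import numpy as np

def hazard(t_since_last_cluster, background=0.02,
           h_dec0=0.10, decay=0.15,                      # decays after the last cluster
           h_inc0=0.08, mean_interval=200.0, shape=3.0):  # grows toward the next cluster
    t = np.asarray(t_since_last_cluster, dtype=float)
    h_dec = h_dec0 * np.exp(-decay * t)
    h_inc = h_inc0 * (t / mean_interval) ** shape
    return background + h_dec + h_inc

years = np.array([1, 10, 50, 100, 200, 300])
for y, h in zip(years, hazard(years)):
    print(f"{y:4d} yr since last cluster: hazard ~ {h:.4f} / yr")
```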

  5. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  6. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    Science.gov (United States)

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt is presented in a salt-distillation process for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point increment with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
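
    A minimal sketch of the flash-point calculation that such models build on: Le Chatelier's rule applied to a flammable binary mixture, here in its ideal-solution form with unit activity coefficients. The paper's non-ideal model would instead supply activity coefficients from an activity-coefficient model that accounts for the dissolved salt; the Antoine constants and pure-component flash points below are approximate illustrative values that should be verified against a property database.

```python
# Sketch: Le Chatelier-based flash point estimate for a flammable binary mixture,
# ideal-solution simplification (unit activity coefficients).
import numpy as np
from scipy.optimize import brentq

def psat_mmHg(T_C, A, B, C):
    """Antoine equation, T in deg C, pressure in mmHg."""
    return 10.0 ** (A - B / (T_C + C))

# (A, B, C, pure-component flash point in deg C) -- illustrative values.
components = {
    "ethanol":     (8.204, 1642.9, 230.3, 13.0),
    "isopropanol": (8.118, 1580.9, 219.6, 12.0),
}
x = {"ethanol": 0.5, "isopropanol": 0.5}      # liquid mole fractions
gamma = {"ethanol": 1.0, "isopropanol": 1.0}   # unit activity coefficients (ideal case)

def le_chatelier(T_C):
    # The mixture flash point is the temperature where this expression equals zero.
    total = 0.0
    for name, (A, B, C, T_fp) in components.items():
        total += x[name] * gamma[name] * psat_mmHg(T_C, A, B, C) / psat_mmHg(T_fp, A, B, C)
    return total - 1.0

T_flash = brentq(le_chatelier, -30.0, 60.0)
print(f"Estimated mixture flash point: {T_flash:.1f} deg C")
```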

  7. Modeling and Hazard Analysis Using STPA

    Science.gov (United States)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

    A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process where hazard analysis drives the design decisions, rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis

  8. Do French macroseismic intensity observations agree with expectations from the European Seismic Hazard Model 2013?

    Science.gov (United States)

    Rey, Julien; Beauval, Céline; Douglas, John

    2018-02-01

    Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundreds of years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
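
    The comparison logic described above can be sketched as follows: convert the model's annual exceedance rate into an expected number of exceedances over the complete observation window and, under a Poisson assumption, ask how surprising the observed count is. The rate, window length, and observed count below are made-up illustrative numbers.

```python
# Sketch: compare observed intensity exceedances at a city with the count expected
# from a hazard model's annual exceedance rate (Poisson assumption).
from scipy.stats import poisson

annual_rate_IV = 0.020      # model's annual rate of exceeding intensity IV at the site
window_years = 300          # length of the complete observation window
observed_exceedances = 11   # exceedances counted in the macroseismic record

expected = annual_rate_IV * window_years
# How unusual is observing at least this many exceedances if the model rate is right?
p_at_least = poisson.sf(observed_exceedances - 1, expected)
print(f"expected {expected:.1f} exceedances, observed {observed_exceedances}")
print(f"P(>= observed | model) = {p_at_least:.3f}")
```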

  10. Study on integrated approach of Nuclear Accident Hazard Predicting, Warning, and Optimized Controlling System based on GIS

    International Nuclear Information System (INIS)

    Tang Lijuan; Huang Shunxiang; Wang Xinming

    2012-01-01

    The issue of nuclear safety became the focus of international attention after the nuclear accident at Fukushima. To meet the requirements for the prevention and control of nuclear accidents, the establishment of a Nuclear Accident Hazard Predicting, Warning and Optimized Controlling System (NAPWS) is an imperative project urgently needed by our country and army, one that spans multiple disciplines such as nuclear physics, atmospheric science, security science, computer science and geographical information technology. Multiple platforms, systems and modes are integrated effectively on the basis of GIS, and the predicting, warning and optimized controlling technology system for nuclear accident hazards is thereby established. (authors)

  11. PREDICT: a new UK prognostic model that predicts survival following surgery for invasive breast cancer.

    Science.gov (United States)

    Wishart, Gordon C; Azzato, Elizabeth M; Greenberg, David C; Rashbass, Jem; Kearins, Olive; Lawrence, Gill; Caldas, Carlos; Pharoah, Paul D P

    2010-01-01

    The aim of this study was to develop and validate a prognostication model to predict overall and breast cancer specific survival for women treated for early breast cancer in the UK. Using the Eastern Cancer Registration and Information Centre (ECRIC) dataset, information was collated for 5,694 women who had surgery for invasive breast cancer in East Anglia from 1999 to 2003. Breast cancer mortality models for oestrogen receptor (ER) positive and ER negative tumours were derived from these data using Cox proportional hazards, adjusting for prognostic factors and mode of cancer detection (symptomatic versus screen-detected). An external dataset of 5,468 patients from the West Midlands Cancer Intelligence Unit (WMCIU) was used for validation. Differences between actual and predicted overall mortality were assessed, and mode of detection was incorporated for the first time. The model is well calibrated, provides a high degree of discrimination and has been validated in a second UK patient cohort.

  12. Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2005-01-01

    cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling

  13. Quantitative occupational risk model: Single hazard

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Aneziris, O.N.; Bellamy, L.J.; Ale, B.J.M.; Oh, J.

    2017-01-01

    A model for the quantification of the occupational risk of a worker exposed to a single hazard is presented. The model connects the working conditions and worker behaviour to the probability of an accident resulting in one of three types of consequence: recoverable injury, permanent injury and death. Working conditions and safety barriers in place to reduce the likelihood of an accident are included. Logical connections are modelled through an influence diagram. Quantification of the model is based on two sources of information: (a) the number of accidents observed over a period of time and (b) assessment of exposure data on activities and working conditions over the same period of time and the same working population. The effectiveness of risk-reducing measures affecting the working conditions, worker behaviour and/or safety barriers can be quantified through the effect of these measures on occupational risk. - Highlights: • Quantification of occupational risk from a single hazard. • An influence diagram connects working conditions, worker behaviour and safety barriers. • Necessary data include the number of accidents and the total exposure of workers. • The effectiveness of risk-reducing measures is quantified through their impact on the risk. • An example illustrates the methodology.
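
    A bare-bones numerical sketch of the quantification described above: an accident probability per unit of exposure, split across the three consequence types, with a risk-reducing measure represented as a proportional reduction. The probabilities and exposure figures are illustrative assumptions.

```python
# Sketch: single-hazard occupational risk quantification in the spirit of the model
# above. All figures are illustrative assumptions.
exposure_hours = 1_200_000          # total hours worked at the hazardous activity
accidents_observed = 6              # accidents over the same period

p_accident_per_hour = accidents_observed / exposure_hours

# Split of accident consequences: recoverable injury, permanent injury, death.
consequence_split = {"recoverable": 0.80, "permanent": 0.17, "death": 0.03}
risk_per_hour = {k: p_accident_per_hour * v for k, v in consequence_split.items()}
print(risk_per_hour)

# Effect of a risk-reducing measure: suppose a better safety barrier cuts the
# accident probability by 40% for the same exposure.
reduction = 0.40
print("death risk per hour after measure:",
      risk_per_hour["death"] * (1 - reduction))
```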

  14. The 2014 update to the National Seismic Hazard Model in California

    Science.gov (United States)

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  15. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    Science.gov (United States)

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  16. Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA)

    Science.gov (United States)

    EPA has developed databases and predictive models to help evaluate the hazard, exposure, and risk of chemicals released to the environment and how workers, the general public, and the environment may be exposed to and affected by them.

  17. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    International Nuclear Information System (INIS)

    Li, Lu; Huang, Xianjia; Bi, Kun; Liu, Xiaoshuang

    2016-01-01

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays) was developed to estimate the heat release rate for vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted. The histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.

  18. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lu [State Key Laboratory of Fire Science, University of Science and Technology of China, Hefei 230027 (China); Huang, Xianjia, E-mail: huangxianjia@gziit.ac.cn [Joint Laboratory of Fire Safety in Nuclear Power Plants, Institute of Industry Technology Guangzhou & Chinese Academy of Sciences, Guangzhou 511458 (China); Bi, Kun; Liu, Xiaoshuang [China Nuclear Power Design Co., Ltd., Shenzhen 518045 (China)

    2016-05-15

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays) was developed to estimate the heat release rate for vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted. The histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.

  19. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
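
    The source-estimation half of such a scheme can be sketched as below: a simple Gaussian plume forward model (standing in for the trained ANN surrogate) and a hand-rolled particle swarm that recovers release rate and location from noisy sensor readings. Geometry, dispersion coefficients, sensor layout, and PSO settings are illustrative assumptions; the EM step is not shown.

```python
# Sketch: source-term estimation by particle swarm optimization against a simple
# Gaussian plume forward model (illustrative stand-in for the ANN surrogate).
import numpy as np

def plume(q, x0, y0, xs, ys, u=3.0, h=10.0):
    """Ground-level concentration from a continuous point source of strength q."""
    dx = xs - x0                               # downwind distance (wind along +x)
    dy = ys - y0
    dx_pos = np.maximum(dx, 1e-3)
    sy = 0.08 * dx_pos                         # crude dispersion coefficients
    sz = 0.06 * dx_pos
    c = (q / (np.pi * u * sy * sz)
         * np.exp(-dy**2 / (2 * sy**2)) * np.exp(-h**2 / (2 * sz**2)))
    return np.where(dx > 0, c, 0.0)            # upwind receptors see nothing

rng = np.random.default_rng(5)
xs = rng.uniform(0, 500, 30)                   # sensor coordinates (m)
ys = rng.uniform(-100, 100, 30)
true = np.array([50.0, 20.0, -10.0])           # q (g/s), x0, y0
obs = plume(*true, xs, ys) * (1 + 0.05 * rng.normal(size=xs.size))

def cost(theta):
    q, x0, y0 = theta
    return np.sum((plume(q, x0, y0, xs, ys) - obs) ** 2)

# Minimal particle swarm optimization over (q, x0, y0).
lo, hi = np.array([1.0, 0.0, -100.0]), np.array([200.0, 200.0, 100.0])
n_p, n_iter = 40, 200
pos = rng.uniform(lo, hi, (n_p, 3))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((n_p, 1)), rng.random((n_p, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("true  source:", true)
print("found source:", np.round(gbest, 1))
```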

  20. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  1. Development of a flood-induced health risk prediction model for Africa

    Science.gov (United States)

    Lee, D.; Block, P. J.

    2017-12-01

    Globally, many floods occur in developing or tropical regions where the impact on public health is substantial, including death and injury, contaminated drinking water, endemic disease, and so on. Although these flood impacts on public health have been investigated, integrated management of floods and flood-induced health risks is technically and institutionally limited. Specifically, while the use of climatic and hydrologic forecasts for disaster management has been highlighted, analogous predictions for forecasting the magnitude and impact of health risks are lacking, as is the infrastructure for health early warning systems, particularly in developing countries. In this study, we develop a flood-induced health risk prediction model for African regions using season-ahead flood predictions with climate drivers and a variety of physical and socio-economic information, such as local hazard, exposure, resilience, and health vulnerability indicators. Skillful prediction of floods and flood-induced health risks can contribute to practical pre- and post-disaster responses at both local and global scales, and may eventually be integrated into multi-hazard early warning systems for informed advance planning and management.

  2. Improved Performance and Safety for High Energy Batteries Through Use of Hazard Anticipation and Capacity Prediction

    Science.gov (United States)

    Atwater, Terrill

    1993-01-01

    Prediction of the capacity remaining in used high rate, high energy batteries is important information to the user. Knowledge of the capacity remaining in used batteries results in better utilization. This translates into improved readiness and cost savings due to complete, efficient use. High rate batteries, due to their chemical nature, are highly sensitive to misuse (i.e., over discharge or very high rate discharge). Battery failure due to misuse or manufacturing defects could be disastrous. Since high rate, high energy batteries are expensive and energetic, a reliable method of predicting both failures and remaining energy has been actively sought. Due to concerns over safety, the behavior of lithium/sulphur dioxide cells at different temperatures and current drains was examined. The main thrust of this effort was to determine failure conditions for incorporation in hazard anticipation circuitry. In addition, capacity prediction formulas have been developed from test data. A process that performs continuous, real-time hazard anticipation and capacity prediction was developed. The introduction of this process into microchip technology will enable the production of reliable, safe, and efficient high energy batteries.

  3. Nonlinear joint models for individual dynamic prediction of risk of death using Hamiltonian Monte Carlo: application to metastatic prostate cancer

    Directory of Open Access Journals (Sweden)

    Solène Desmée

    2017-07-01

    Full Text Available Abstract Background Joint models of longitudinal and time-to-event data are increasingly used to perform individual dynamic prediction of the risk of an event. However, the difficulty of performing inference in nonlinear models and of calculating the distribution of individual parameters has long limited this approach to linear mixed-effect models for the longitudinal part. Here we use a Bayesian algorithm and a nonlinear joint model to calculate individual dynamic predictions. We apply this approach to predict the risk of death in metastatic castration-resistant prostate cancer (mCRPC) patients with frequent Prostate-Specific Antigen (PSA) measurements. Methods A joint model is built using a large population of 400 mCRPC patients in which PSA kinetics is described by a biexponential function and the hazard function is a PSA-dependent function. Using the Hamiltonian Monte Carlo algorithm implemented in the Stan software and the estimated population parameters in this population as priors, the a posteriori distribution of the hazard function is computed for a new patient given his PSA measurements up to a given landmark time. The time-dependent area under the ROC curve (AUC) and the Brier score are derived to assess the discrimination and calibration of the model predictions, first on 200 simulated patients and then on 196 real patients who were not included in building the model. Results Satisfactory coverage probabilities of Monte Carlo prediction intervals are obtained for the longitudinal and hazard functions. Individual dynamic predictions provide good predictive performance for landmark times larger than 12 months and horizon times of up to 18 months for both simulated and real data. Conclusions As nonlinear joint models can characterize the kinetics of biomarkers and their link with a time-to-event, this approach could be useful to improve patient follow-up and the early detection of the patients most at risk.
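
    The structural pieces of such a joint model can be illustrated with a few functions: biexponential PSA kinetics, a hazard that depends on the current PSA value, and numerical integration of that hazard to obtain a conditional survival prediction from a landmark time. Parameter values below are illustrative assumptions; the paper estimates them with Hamiltonian Monte Carlo in Stan.

```python
# Sketch: biexponential PSA kinetics feeding a PSA-dependent hazard, integrated to
# give a conditional survival prediction. Illustrative parameter values only.
import numpy as np

def psa(t, psa0=30.0, f=0.7, k_elim=0.08, k_prog=0.02):
    """Biexponential PSA kinetics: early decline under treatment, later regrowth."""
    return psa0 * (f * np.exp(-k_elim * t) + (1 - f) * np.exp(k_prog * t))

def hazard(t, h0=0.005, beta=0.03):
    """PSA-dependent hazard of death: log-hazard linear in the current PSA value."""
    return h0 * np.exp(beta * psa(t))

def conditional_survival(s, horizon, n_grid=500):
    """P(T > s + horizon | T > s) = exp(-integral of hazard from s to s + horizon)."""
    t = np.linspace(s, s + horizon, n_grid)
    h = hazard(t)
    cum = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))   # trapezoidal rule
    return np.exp(-cum)

landmark = 12.0          # months of PSA follow-up already observed
for horizon in (6.0, 12.0, 18.0):
    print(f"P(alive at {landmark + horizon:.0f} mo | alive at {landmark:.0f} mo) "
          f"= {conditional_survival(landmark, horizon):.3f}")
```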

  4. Opinion: The use of natural hazard modeling for decision making under uncertainty

    Science.gov (United States)

    David E. Calkin; Mike Mentis

    2015-01-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...

  5. Playing against nature: improving earthquake hazard mitigation

    Science.gov (United States)

    Stein, S. A.; Stein, J.

    2012-12-01

    The great 2011 Tohoku earthquake dramatically demonstrated the need to improve earthquake and tsunami hazard assessment and mitigation policies. The earthquake was much larger than predicted by hazard models, and the resulting tsunami overtopped coastal defenses, causing more than 15,000 deaths and $210 billion damage. Hence if and how such defenses should be rebuilt is a challenging question, because the defenses fared poorly and building ones to withstand tsunamis as large as March's would be too expensive. A similar issue arises along the Nankai Trough to the south, where new estimates warning of tsunamis 2-5 times higher than in previous models raise the question of what to do, given that the timescale on which such events may occur is unknown. Thus in the words of economist H. Hori, "What should we do in face of uncertainty? Some say we should spend our resources on present problems instead of wasting them on things whose results are uncertain. Others say we should prepare for future unknown disasters precisely because they are uncertain". Thus society needs strategies to mitigate earthquake and tsunami hazards that make economic and societal sense, given that our ability to assess these hazards is poor, as illustrated by highly destructive earthquakes that often occur in areas predicted by hazard maps to be relatively safe. Conceptually, we are playing a game against nature "of which we still don't know all the rules" (Lomnitz, 1989). Nature chooses tsunami heights or ground shaking, and society selects the strategy to minimize the total costs of damage plus mitigation costs. As in any game of chance, we maximize our expectation value by selecting the best strategy, given our limited ability to estimate the occurrence and effects of future events. We thus outline a framework to find the optimal level of mitigation by balancing its cost against the expected damages, recognizing the uncertainties in the hazard estimates. This framework illustrates the role of the
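
    A minimal sketch of the cost-benefit framing described above: choose the mitigation level (here a design wall height) that minimizes construction cost plus expected discounted damage. The hazard curve, cost figures and discount rate are illustrative assumptions, not values from the abstract.

      # Minimal sketch of choosing a mitigation level (design wall height) that
      # minimizes total cost = construction cost + expected discounted damage.
      # The exceedance model and all cost figures are illustrative assumptions.
      import numpy as np

      heights = np.linspace(0.0, 15.0, 151)                  # candidate wall heights (m)
      build_cost = 20.0 * heights                            # cost grows with height (M$)
      annual_exceed_prob = 0.05 * np.exp(-heights / 3.0)     # assumed hazard curve
      damage_if_exceeded = 500.0                             # loss when overtopped (M$)
      years, discount = 50, 0.03

      present_value_factor = sum((1 + discount) ** -y for y in range(1, years + 1))
      expected_damage = annual_exceed_prob * damage_if_exceeded * present_value_factor
      total_cost = build_cost + expected_damage

      best = np.argmin(total_cost)
      print(f"Optimal design height ~ {heights[best]:.1f} m, total cost {total_cost[best]:.0f} M$")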

  6. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on type II/item-censored test data without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with regression estimators of (a,b). The comparison criterion is the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates of the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case with c = 2.
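
    For reference, the cumulative hazard and survival function implied by this hazard rate follow directly (standard results, written out here only to make the special case explicit):

      \[
      h(t) = a + b\,t^{\,c-1}, \qquad
      H(t) = \int_0^t h(u)\,du = a\,t + \frac{b}{c}\,t^{c}, \qquad
      S(t) = e^{-H(t)} = \exp\!\left(-a\,t - \frac{b}{c}\,t^{c}\right),
      \]
      with the linearly increasing case $h(t) = a + b\,t$, $H(t) = a\,t + \tfrac{b}{2}t^{2}$ recovered for $c = 2$.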

  7. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    Science.gov (United States)

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the hazard of ground subsidence using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model, a probabilistic approach, was also applied to the GSH mapping for comparison. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision tree models. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
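
    A minimal sketch of the train/validate workflow described above, using scikit-learn. Note that scikit-learn implements CART rather than CHAID or QUEST, and the synthetic data stand in for the real conditioning factors, so this only mirrors the 50/50 split and AUC-validation steps.

      # Minimal sketch of the described workflow: fit a decision tree on conditioning
      # factors and validate the resulting hazard scores with AUC. scikit-learn
      # implements CART rather than CHAID/QUEST, so this only mirrors the workflow.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 5))                 # stand-ins for slope, geology, depth, ...
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # subsidence flag

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
      scores = tree.predict_proba(X_test)[:, 1]      # hazard index for the held-out cells
      print(f"Validation AUC: {roc_auc_score(y_test, scores):.3f}")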

  8. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    Science.gov (United States)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity. Natural disasters that frequently affect Semarang City include tidal floods, river floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi-hazard mapping to support its disaster mitigation planning. Multi-hazard map modelling can be derived from parameters such as slope, rainfall, land use, and soil type. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques to provide a good classification when determining disaster threats. The GIS-Fuzzy method can build a multi-hazard map of Semarang City that delivers good accuracy and an appropriate spread of threat classes, providing disaster information for the city's disaster mitigation planning. The multi-hazard modelling using GIS-Fuzzy shows that the Gaussian membership function gives the best accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership functions tested.
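
    A minimal sketch of the GIS-Fuzzy combination described above: each factor layer is turned into a [0, 1] membership with a Gaussian function and the layers are combined with the fuzzy gamma operator. The toy rasters, membership centres and gamma value are illustrative assumptions.

      # Minimal sketch of a fuzzy overlay for multi-hazard mapping: each factor layer
      # is converted to a [0, 1] membership with a Gaussian function, then the layers
      # are combined with the fuzzy gamma operator. Values and parameters are illustrative.
      import numpy as np

      def gauss_membership(x, center, sigma):
          """Gaussian membership: 1 at the most hazardous value, decaying with distance."""
          return np.exp(-0.5 * ((x - center) / sigma) ** 2)

      def fuzzy_gamma(memberships, gamma=0.9):
          """Fuzzy gamma operator: compromise between fuzzy algebraic sum and product."""
          product = np.prod(memberships, axis=0)
          algebraic_sum = 1.0 - np.prod(1.0 - memberships, axis=0)
          return (algebraic_sum ** gamma) * (product ** (1.0 - gamma))

      slope = np.array([[5.0, 25.0], [40.0, 10.0]])               # degrees, toy 2x2 raster
      rainfall = np.array([[1800.0, 2400.0], [3000.0, 2000.0]])   # mm/yr

      layers = np.stack([
          gauss_membership(slope, center=45.0, sigma=15.0),
          gauss_membership(rainfall, center=3000.0, sigma=600.0),
      ])
      print(fuzzy_gamma(layers))                                  # combined hazard index per cell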

  9. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Science.gov (United States)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 years and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent, globally scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  10. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    Science.gov (United States)

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure score. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence interval, 0.858-0.910) for model 1 and 0.913 (95% confidence interval, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ2 was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence interval, 0.840-0.892) for model 1 and 0.850 (95% confidence interval, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ2 was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratio, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratio, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.

  11. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three models to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model to evaluate slope stability over SINMAP. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
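
    For orientation, the sketch below evaluates the generic infinite-slope factor of safety that regional stability models of this kind build on (SINMAP, LISA and the steady limit of Iverson's model all use variants of it); the soil parameters are illustrative, not the Madison County values.

      # Minimal sketch of the infinite-slope factor of safety underlying regional
      # slope-stability models such as SINMAP, LISA and the steady limit of
      # Iverson's transient model. Soil parameters are illustrative only.
      import math

      def factor_of_safety(cohesion_pa, phi_deg, slope_deg, soil_depth_m,
                           water_depth_m, unit_weight=18000.0, water_unit_weight=9810.0):
          """FS = [c' + (gamma*z - gamma_w*z_w) * cos^2(theta) * tan(phi')]
                  / [gamma * z * sin(theta) * cos(theta)]"""
          theta = math.radians(slope_deg)
          phi = math.radians(phi_deg)
          effective_normal = (unit_weight * soil_depth_m
                              - water_unit_weight * water_depth_m) * math.cos(theta) ** 2
          resisting = cohesion_pa + effective_normal * math.tan(phi)
          driving = unit_weight * soil_depth_m * math.sin(theta) * math.cos(theta)
          return resisting / driving

      # Low-cohesion colluvium on a 30 degree slope with the water table at the surface:
      print(f"FS = {factor_of_safety(2000.0, 35.0, 30.0, 1.5, 1.5):.2f}")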

  12. Development of a predictive model for 6 month survival in patients with venous thromboembolism and solid malignancy requiring IVC filter placement.

    Science.gov (United States)

    Huang, Steven Y; Odisio, Bruno C; Sabir, Sharjeel H; Ensor, Joe E; Niekamp, Andrew S; Huynh, Tam T; Kroll, Michael; Gupta, Sanjay

    2017-07-01

    Our purpose was to develop a predictive model for short-term survival (i.e., survival of less than 6 months) following filter placement in patients with venous thromboembolism (VTE) and solid malignancy. Clinical and laboratory parameters were retrospectively reviewed for patients with solid malignancy who received a filter between January 2009 and December 2011 at a tertiary care cancer center. Multivariate Cox proportional hazards modeling was used to assess variables associated with 6 month survival following filter placement in patients with VTE and solid malignancy. Significant variables were used to generate a predictive model. 397 patients with solid malignancy received a filter during the study period. Three variables were associated with 6 month survival, including serum albumin [hazard ratio (HR) 0.496]. Survival at 6 months following filter placement can therefore be predicted from three patient variables. Our predictive model could be used to help physicians decide whether a permanent or retrievable filter may be more appropriate, as well as to assess the risks and benefits of filter retrieval within the context of survival longevity in patients with cancer.

  13. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    International Nuclear Information System (INIS)

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporate physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs.

  14. Landslide Hazard Assessment and Mapping in the Guil Catchment (Queyras, Southern French Alps): From Landslide Inventory to Susceptibility Modelling

    Science.gov (United States)

    Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique

    2016-04-01

    Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As a part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need to have a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50,000 scale as a first step towards global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling

  15. Religiousness and hazardous alcohol use: a conditional indirect effects model.

    Science.gov (United States)

    Jankowski, Peter J; Hardy, Sam A; Zamboanga, Byron L; Ham, Lindsay S

    2013-08-01

    The current study examined a conditional indirect effects model of the association between religiousness and adolescents' hazardous alcohol use. In doing so, we responded to the need to include both mediators and moderators, and the need for theoretically informed models when examining religiousness and adolescents' alcohol use. The sample consisted of 383 adolescents, aged 15-18, who completed an online questionnaire. Results of structural equation modeling supported the proposed model. Religiousness was indirectly associated with hazardous alcohol use through both positive alcohol expectancy outcomes and negative alcohol expectancy valuations. Significant moderating effects for alcohol expectancy valuations on the association between alcohol expectancies and alcohol use were also found. The effects for alcohol expectancy valuations confirm valuations as a distinct construct to that of alcohol expectancy outcomes, and offer support for the protective role of internalized religiousness on adolescents' hazardous alcohol use as a function of expectancy valuations. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  16. Rockfall hazard analysis using LiDAR and spatial modeling

    Science.gov (United States)

    Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho

    2010-05-01

    Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.

  17. Forecasting extreme temperature health hazards in Europe

    Science.gov (United States)

    Di Napoli, Claudia; Pappenberger, Florian; Cloke, Hannah L.

    2017-04-01

    Extreme hot temperatures, such as those experienced during a heat wave, represent a dangerous meteorological hazard to human health. Heat disorders such as sunstroke are harmful to people of all ages and responsible for excess mortality in the affected areas. In 2003 more than 50,000 people died in western and southern Europe because of a severe and sustained episode of summer heat [1]. Furthermore, according to the Intergovernmental Panel on Climate Change, heat waves are expected to become more frequent in the future, thus posing an increasing threat to human lives. Developing appropriate tools for the prediction of extreme hot temperatures is therefore essential to increase public preparedness and mitigate heat-induced impacts. A recent study has shown that forecasts of the Universal Thermal Climate Index (UTCI) provide a valid overview of extreme temperature health hazards on a global scale [2]. UTCI is a parameter related to the temperature of the human body and its regulatory responses to the surrounding atmospheric environment. UTCI is calculated using an advanced thermo-physiological model that includes the human heat budget, physiology and clothing. To forecast UTCI, the model uses meteorological inputs, such as 2m air temperature, 2m water vapour pressure and wind velocity at body height derived from 10m wind speed, from NWP models. Here we examine the potential of UTCI as an extreme hot temperature prediction tool for the European area. UTCI forecasts calculated using the above-mentioned parameters from ECMWF models are presented. The skill in predicting UTCI for medium lead times is also analysed and discussed for implementation in international health-hazard warning systems. This research is supported by the ANYWHERE project (EnhANcing emergencY management and response to extreme WeatHER and climate Events) which is funded by the European Commission's HORIZON2020 programme. [1] Koppe C. et al., Heat waves: risks and responses. World Health Organization. Health and

  18. Modeling of Marine Natural Hazards in the Lesser Antilles

    Science.gov (United States)

    Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim

    2010-05-01

    The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis and flooding. The historical data of marine natural hazards for the Lesser Antilles and specially, for Guadeloupe are presented briefly. Numerical simulation of several historical tsunamis in the Caribbean Sea (1755 Lisbon trans-Atlantic tsunami, 1867 Virgin Island earthquake tsunami, 2003 Montserrat volcano tsunami) are performed within the framework of the nonlinear-shallow theory. Numerical results demonstrate the importance of the real bathymetry variability with respect to the direction of propagation of tsunami wave and its characteristics. The prognostic tsunami wave height distribution along the Caribbean Coast is computed using various forms of seismic and hydrodynamics sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular, cyclones "Lilli" in 2002 and "Dean" in 2007. Obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. Obtained results are described in details in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science Tsunami Hazards. 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I
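
    For reference, a standard one-dimensional form of the nonlinear shallow-water equations used in such tsunami and storm-surge simulations is (a generic textbook form over variable bathymetry, not the authors' exact formulation):

      \[
      \frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\bigl[(h+\eta)\,u\bigr] = 0, \qquad
      \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0,
      \]
      where $\eta(x,t)$ is the free-surface displacement, $h(x)$ the undisturbed water depth, $u(x,t)$ the depth-averaged velocity and $g$ the gravitational acceleration.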

  19. Fuzzy Cognitive Maps for Glacier Hazards Assessment: Application to Predicting the Potential for Glacier Lake Outbursts

    Science.gov (United States)

    Furfaro, R.; Kargel, J. S.; Fink, W.; Bishop, M. P.

    2010-12-01

    Glaciers and ice sheets are among the largest unstable parts of the solid Earth. Generally, glaciers are devoid of resources (other than water), are dangerous, are unstable and no infrastructure is normally built directly on their surfaces. Areas down valley from large alpine glaciers are also commonly unstable due to landslide potential of moraines, debris flows, snow avalanches, outburst floods from glacier lakes, and other dynamical alpine processes; yet there exists much development and human occupation of some disaster-prone areas. Satellite remote sensing can be extremely effective in providing cost-effective and time-critical information. Space-based imagery can be used to monitor glacier outlines and their lakes, including processes such as iceberg calving and debris accumulation, as well as changing thicknesses and flow speeds. Such images can also be used to make preliminary identifications of specific hazardous spots and allow preliminary assessment of possible modes of future disaster occurrence. Autonomous assessment of glacier conditions and their potential for hazards would present a major advance and permit systematized analysis of more data than humans can assess. This technical leap will require the design and implementation of Artificial Intelligence (AI) algorithms specifically designed to mimic glacier experts’ reasoning. Here, we introduce the theory of Fuzzy Cognitive Maps (FCM) as an AI tool for predicting and assessing natural hazards in alpine glacier environments. FCM techniques are employed to represent expert knowledge of glaciers' physical processes. A cognitive model embedded in a fuzzy logic framework is constructed via the synergistic interaction between glaciologists and AI experts. To verify the effectiveness of the proposed AI methodology as applied to predicting hazards in glacier environments, we designed and implemented an FCM that addresses the challenging problem of autonomously assessing the Glacier Lake Outburst Flow
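
    A minimal sketch of how a Fuzzy Cognitive Map propagates expert-defined causal weights to a set of concept activations; the concepts, weights and sigmoid steepness below are illustrative assumptions, not the expert map built in the cited work.

      # Minimal sketch of a Fuzzy Cognitive Map update: concept activations are
      # propagated through an expert-defined weight matrix and squashed with a
      # sigmoid until the map settles. Concepts and weights are illustrative only.
      import numpy as np

      concepts = ["lake_area_growth", "moraine_dam_weakness", "ice_avalanche_activity", "GLOF_potential"]
      W = np.array([                  # W[i, j]: influence of concept i on concept j
          [0.0, 0.0, 0.0, 0.6],
          [0.0, 0.0, 0.0, 0.8],
          [0.3, 0.2, 0.0, 0.5],
          [0.0, 0.0, 0.0, 0.0],
      ])

      def step(state, W, lam=2.0):
          """One FCM iteration with sigmoid squashing."""
          return 1.0 / (1.0 + np.exp(-lam * (state + state @ W)))

      state = np.array([0.7, 0.5, 0.6, 0.0])   # initial evidence, e.g. from remote sensing
      for _ in range(25):                       # iterate towards a fixed point
          state = step(state, W)
      print(dict(zip(concepts, np.round(state, 2))))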

  20. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or the proportionality of hazards. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN models in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
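
    A minimal sketch of the single time-point idea: one binary classifier per follow-up horizon, with patients censored before that horizon excluded so the outcome label is known. The synthetic data, network size and censoring handling are illustrative assumptions and simpler than the approach of the paper.

      # Minimal sketch of single time-point models: one binary classifier per
      # follow-up horizon, trained only on patients whose status at that horizon
      # is known (death observed, or follow-up reaching the horizon).
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      n = 800
      X = rng.normal(size=(n, 6))                          # prognostic variables
      time = rng.exponential(scale=3.0, size=n)            # observed follow-up time (years)
      event = rng.integers(0, 2, size=n)                   # 1 = death observed, 0 = censored

      models = {}
      for horizon in (1, 2, 3, 4, 5):
          usable = (time >= horizon) | (event == 1)        # label is known for these patients
          label = ((time < horizon) & (event == 1)).astype(int)
          models[horizon] = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                                          random_state=0).fit(X[usable], label[usable])
      print(models[5].predict_proba(X[:1])[:, 1])          # 5-year death probability, first patient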

  1. Predictions of asteroid hazard to the Earth for the 21st century

    Science.gov (United States)

    Petrov, Nikita; Sokolov, Leonid; Polyakhova, Elena; Oskina, Kristina

    2018-05-01

    Early detection and investigation of possible collisions and close approaches of asteroids with the Earth are necessary to exclude the asteroid-comet hazard. The prediction of close approaches and collisions associated with resonant returns after encounters with the Earth is difficult because of the loss of precision in these encounters. The main research object is asteroid Apophis (99942), for which we found many possible impact orbits associated with resonant returns. It is shown that an early orbit change of Apophis allows the main impacts associated with resonant returns to be avoided. Such a change of the orbit is, in principle, feasible. We also study the possible impacts of asteroid 2015 RN35 with the Earth. We present 21 possible collisions in this century, including 7 collisions with large gaps presented on the NASA website. The results of observations with the ZA-320M telescope at Pulkovo Observatory of three near-Earth asteroids, namely 7822, 20826, and 68216, two of which (7822 and 68216) are potentially hazardous, are presented.

  2. Geospatial subsidence hazard modelling at Sterkfontein Caves ...

    African Journals Online (AJOL)

    The geo-hazard subsidence model includes historic subsidence occurrences, terrain (water flow) and water accumulation. Water accumulating on the surface will percolate and reduce the strength of the soil mass, possibly inducing subsidence. Areas for further geotechnical investigation are identified, demonstrating that a ...

  3. Lin Receives 2010 Natural Hazards Focus Group Award for Graduate Research

    Science.gov (United States)

    2010-11-01

    Ning Lin has been awarded the Natural Hazards Focus Group Award for Graduate Research, given annually to a recent Ph.D. recipient for outstanding contributions to natural hazards research. Lin's thesis is entitled “Multi-hazard risk analysis related to hurricanes.” She is scheduled to present an invited talk in the Extreme Natural Events: Modeling, Prediction, and Mitigation session (NH20) during the 2010 AGU Fall Meeting, held 13-17 December in San Francisco, Calif. Lin will be formally presented with the award at the Natural Hazards focus group reception on 14 December 2010.

  4. Mathematical model for prediction of droplet sizes and distribution associated with impact of liquid-containing projectile

    International Nuclear Information System (INIS)

    Shelke, Ashish V.; Gera, B.; Maheshwari, N.K.; Singh, R.K.

    2018-01-01

    After the events of 9/11, the impact of fast-flying commercial aircraft is considered a major hazard threatening nuclear power plant (NPP) safety. The study of the fuel spillage phenomenon and fireball formation is important for understanding fire hazards due to the burning of dispersed aviation fuel. A detailed analysis of fuel dispersion is very difficult because of both the large NPP structures and the large size of commercial aircraft. Sandia National Laboratories, USA, conducted impact tests using cylindrical projectiles filled with water to measure the associated parameters. Due to the combustion properties and volatile nature of hydrocarbon fuels, the parameters obtained from impact studies using water are incomplete for fire analysis of flammable droplet clouds. A mathematical model is developed for the prediction of droplet sizes and distributions associated with the impact of a liquid-containing projectile. The model can predict the transient behavior of the droplet cloud. It is validated with experimental data available in the literature. In the present study, the analysis has been performed using water and kerosene. The data obtained can be utilized as boundary and initial conditions for CFD analysis. This information is useful for fire hazard analysis of aircraft impacts on NPP structures.

  5. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    Science.gov (United States)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  6. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of an auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between the auxiliary indicator and the disaggregated number of exposures is generally imperfect, uncertainty arises in the disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...
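
    A minimal sketch of probabilistic disaggregation: an aggregated building count is distributed over hazard-resolution cells by sampling cell shares from a Dirichlet distribution whose concentration follows an auxiliary indicator. The paper uses a scaled Dirichlet; the standard Dirichlet and the numbers below only illustrate the idea.

      # Minimal sketch of probabilistic disaggregation: an aggregated exposure count
      # is distributed over hazard-resolution cells by sampling cell shares from a
      # Dirichlet distribution whose concentration follows an auxiliary indicator
      # (here, built-up land-cover fraction). Numbers are illustrative only.
      import numpy as np

      rng = np.random.default_rng(42)
      total_buildings = 1200                                  # count known only for the municipality
      builtup_fraction = np.array([0.05, 0.30, 0.50, 0.15])   # auxiliary indicator per cell
      precision = 50.0                                        # higher = trust the indicator more

      samples = rng.dirichlet(precision * builtup_fraction, size=1000)   # uncertain cell shares
      buildings_per_cell = samples * total_buildings

      mean = buildings_per_cell.mean(axis=0)
      low, high = np.percentile(buildings_per_cell, [5, 95], axis=0)
      for i, (m, lo_, hi_) in enumerate(zip(mean, low, high)):
          print(f"cell {i}: {m:6.1f} buildings (90% interval {lo_:6.1f} - {hi_:6.1f})")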

  7. Autonomous Soil Assessment System: A Data-Driven Approach to Planetary Mobility Hazard Detection

    Science.gov (United States)

    Raimalwala, K.; Faragalli, M.; Reid, E.

    2018-04-01

    The Autonomous Soil Assessment System predicts mobility hazards for rovers. Its development and performance are presented, with focus on its data-driven models, machine learning algorithms, and real-time sensor data fusion for predictive analytics.

  8. Predictive Models of target organ and Systemic toxicities (BOSC)

    Science.gov (United States)

    The objective of this work is to predict the hazard classification and point of departure (PoD) of untested chemicals in repeat-dose animal testing studies. We used supervised machine learning to objectively evaluate the predictive accuracy of different classification and regress...

  9. A conflict model for the international hazardous waste disposal dispute

    International Nuclear Information System (INIS)

    Hu Kaixian; Hipel, Keith W.; Fang, Liping

    2009-01-01

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  10. A conflict model for the international hazardous waste disposal dispute.

    Science.gov (United States)

    Hu, Kaixian; Hipel, Keith W; Fang, Liping

    2009-12-15

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  11. Spatial prediction models for landslide hazards: review, comparison and evaluation

    Directory of Open Access Journals (Sweden)

    A. Brenning

    2005-01-01

    Full Text Available The predictive power of logistic regression, support vector machines and bootstrap-aggregated classification trees (bagging, double-bagging is compared using misclassification error rates on independent test data sets. Based on a resampling approach that takes into account spatial autocorrelation, error rates for predicting 'present' and 'future' landslides are estimated within and outside the training area. In a case study from the Ecuadorian Andes, logistic regression with stepwise backward variable selection yields lowest error rates and demonstrates the best generalization capabilities. The evaluation outside the training area reveals that tree-based methods tend to overfit the data.
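
    A minimal sketch of the kind of comparison described above, with spatially blocked cross-validation so that autocorrelated neighbouring cells do not appear in both training and test folds; the coarse grid blocking and synthetic data are stand-ins for the paper's resampling scheme.

      # Minimal sketch of comparing landslide susceptibility classifiers with
      # spatially blocked cross-validation, so neighbouring (autocorrelated) cells
      # do not end up in both training and test folds. Data and blocking are illustrative.
      import numpy as np
      from sklearn.model_selection import GroupKFold, cross_val_score
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(3)
      n = 2000
      coords = rng.uniform(0, 100, size=(n, 2))              # cell coordinates (km)
      X = rng.normal(size=(n, 4))                            # slope, curvature, land use, ...
      y = (X[:, 0] + 0.01 * coords[:, 0] + rng.normal(size=n) > 0).astype(int)
      blocks = (coords[:, 0] // 25).astype(int) * 4 + (coords[:, 1] // 25).astype(int)

      cv = GroupKFold(n_splits=5)
      for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                          ("bagged trees", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50))]:
          acc = cross_val_score(model, X, y, cv=cv, groups=blocks).mean()
          print(f"{name}: spatially blocked CV error = {1 - acc:.3f}")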

  12. COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING

    Directory of Open Access Journals (Sweden)

    N. Mijani

    2017-09-01

    Full Text Available Landslides are one of the main geomorphic processes affecting development prospects in mountainous areas and causing disastrous accidents. A landslide is an event governed by several uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contributions of this paper are the comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, and the comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, a city in Iran that has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment showed that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR cover 13, 26 and 35 percent of the study area, respectively, with a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zoning in the city of Sari, with the Fuzzy Gamma method second by a small margin.

  13. Ensemble of ground subsidence hazard maps using fuzzy logic

    Science.gov (United States)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

    Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.

  14. Hazard analysis of typhoon-related external events using extreme value theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yo Chan; Jang, Seung Cheol [Integrated Safety Assessment Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lim, Tae Jin [Dept. of Industrial Information Systems Engineering, Soongsil University, Seoul (Korea, Republic of)

    2015-02-15

    After the Fukushima accident, the importance of hazard analysis for extreme external events was recognized. To analyze typhoon-induced hazards, which are among the significant disasters of East Asian countries, a statistical analysis using extreme value theory, a method for estimating the annual exceedance frequency of a rare event, was conducted to estimate occurrence intervals and hazard levels. For the four meteorological variables, maximum wind speed, instantaneous wind speed, hourly precipitation, and daily precipitation, the parameters of the predictive extreme value theory models were estimated. The 100-year return levels for each variable were predicted using the developed models and compared with previously reported values. It was also found that significant long-term climatic changes in wind speed and precipitation exist. A fragility analysis should be conducted to ensure the safety levels of a nuclear power plant for high levels of wind speed and precipitation that exceed the results of a previous analysis.
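
    A minimal sketch of the block-maxima branch of extreme value theory applied as described above: fit a generalized extreme value distribution to annual maxima and read off the 100-year return level. The annual maxima below are synthetic, not the Korean typhoon records.

      # Minimal sketch of the block-maxima approach: fit a generalized extreme value
      # (GEV) distribution to annual maximum wind speeds and read off the 100-year
      # return level (the value exceeded with annual probability 1/100).
      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(7)
      annual_max_wind = 25 + 8 * rng.gumbel(size=40)       # 40 synthetic annual maxima (m/s)

      shape, loc, scale = genextreme.fit(annual_max_wind)
      return_level_100yr = genextreme.isf(1.0 / 100.0, shape, loc=loc, scale=scale)
      print(f"Estimated 100-year wind speed: {return_level_100yr:.1f} m/s")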

  15. Analyzing Right-Censored Length-Biased Data with Additive Hazards Model

    Institute of Scientific and Technical Information of China (English)

    Mu ZHAO; Cun-jie LIN; Yong ZHOU

    2017-01-01

    Length-biased data are often encountered in observational studies, when the survival times are left-truncated and right-censored and the truncation times follow a uniform distribution. In this article, we propose to analyze such data with the additive hazards model, which specifies that the hazard function is the sum of an arbitrary baseline hazard function and a regression function of covariates. We develop estimating equation approaches to estimate the regression parameters. The resultant estimators are shown to be consistent and asymptotically normal. Some simulation studies and a real data example are used to evaluate the finite sample properties of the proposed estimators.
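
    For reference, the additive hazards model referred to above is conventionally written as (notation assumed here):

      \[
      \lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t),
      \]
      where $\lambda_0(t)$ is an unspecified baseline hazard, $Z(t)$ the covariate vector and $\beta$ the regression parameters, so covariates shift the hazard additively rather than multiplicatively as in the Cox model.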

  16. Risk score prediction model for dementia in patients with type 2 diabetes.

    Science.gov (United States)

    Li, Chia-Ing; Li, Tsai-Chung; Liu, Chiu-Shong; Liao, Li-Na; Lin, Wen-Yuan; Lin, Chih-Hsueh; Yang, Sing-Yu; Chiang, Jen-Huai; Lin, Cheng-Chieh

    2018-03-30

    No study has established a dementia prediction model in Asian populations. This study aims to develop a prediction model for dementia in Chinese type 2 diabetes patients. This retrospective cohort study included 27,540 Chinese type 2 diabetes patients (aged 50-94 years) enrolled in the Taiwan National Diabetes Care Management Program. Participants were randomly allocated into derivation and validation sets at a 2:1 ratio. Cox proportional hazards regression models were used to identify risk factors for dementia in the derivation set. The steps proposed by the Framingham Heart Study were used to establish a prediction model with a scoring system. The average follow-up was 8.09 years, with a total of 853 incident dementia cases in the derivation set. The dementia risk score summed the individual component scores (from 0 to 20). The areas under the curve for 3-, 5-, and 10-year dementia risk were 0.82, 0.79, and 0.76 in the derivation set and 0.84, 0.80, and 0.75 in the validation set, respectively. The proposed scoring system is the first dementia risk prediction model for Chinese type 2 diabetes patients in Taiwan. This article is protected by copyright. All rights reserved.

  17. Probabilistic Seismic Hazard Analysis for Yemen

    Directory of Open Access Journals (Sweden)

    Rakesh Mohindra

    2012-01-01

    Full Text Available A stochastic-event probabilistic seismic hazard model, which can be used further for estimates of seismic loss and seismic risk analysis, has been developed for the territory of Yemen. An updated composite earthquake catalogue has been compiled using the databases from two basic sources and several research publications. The spatial distribution of earthquakes from the catalogue was used to define and characterize the regional earthquake source zones for Yemen. To capture all possible scenarios in the seismic hazard model, a stochastic event set has been created consisting of 15,986 events generated from 1,583 fault segments in the delineated seismic source zones. The distribution of horizontal peak ground acceleration (PGA) was calculated for all stochastic events considering epistemic uncertainty in ground-motion modeling using three suitable ground-motion prediction relationships, which were applied with equal weight. The probabilistic seismic hazard maps were created showing PGA and MSK seismic intensity at 10% and 50% probability of exceedance in 50 years, considering local soil site conditions. The resulting PGA for 10% probability of exceedance in 50 years (return period 475 years) ranges from 0.2 g to 0.3 g in western Yemen and generally is less than 0.05 g across central and eastern Yemen. The largest contributors to Yemen’s seismic hazard are the events from the West Arabian Shield seismic zone.
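
    For reference, the 475-year return period quoted above follows from the usual Poisson relation between the exceedance probability P over an exposure time T and the return period T_R:

      \[
      P = 1 - e^{-T/T_R} \;\Rightarrow\; T_R = \frac{-T}{\ln(1 - P)} = \frac{-50}{\ln(0.90)} \approx 475 \text{ years}.
      \]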

  18. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  19. Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report

    International Nuclear Information System (INIS)

    Holcomb, C.; Louie, B.; Mullins, M.E.; Outcalt, S.L.; Rogers, T.N.; Watts, L.

    1998-01-01

    'Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is waste water. In addition, the majority of the waste inventory at DoE sites previously used for nuclear weapons production is aqueous waste. Large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical property information is paramount. This knowledge will lead to huge savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are: Develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes. Accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure component, binary, ternary, and quaternary experimental data required for model development. As of May, 1998, nine months into the first year of a three year project, the authors have made significant progress in the database development, have begun testing the models, and have been performance testing the apparatus on the pure components.'

  20. Operational and contractual impacts in E and P offshore during predicted natural hazards

    Energy Technology Data Exchange (ETDEWEB)

    Benevides, Paulo Roberto Correa de Sa e [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Generally, when E and P operators using DP (Dynamic Positioning) are advised in advance of a possible natural hazard occurrence, they usually treat it as an emergency situation: their main action is oriented only toward preparing the first response and using the 'force majeure' argument to protect themselves from any additional responsibility. When the natural phenomenon actually happens, the expenses due to the losses will be accepted because they were already considered in the budget as 'Losses due to accident' and will be shared by the partners of the project according to the corresponding contractual terms. This paper describes real cases of the evolution of predictions for natural hazards in offshore basins in Brazil, Western Africa and the Gulf of Mexico, where PETROBRAS and many other oil companies have used DP operations. It proposes some alternative procedures, through BCM (Business Continuity Management), to manage natural crises instead of the common use of the traditional 'force majeure' argumentation. (author)

  1. Modelling human interactions in the assessment of man-made hazards

    International Nuclear Information System (INIS)

    Nitoi, M.; Farcasiu, M.; Apostol, M.

    2016-01-01

    Current human reliability assessment (HRA) tools are not capable of adequately modelling the human ability to adapt, innovate and manage under extreme situations. The paper presents the results obtained by the ICN PSA team in the framework of the FP7 Advanced Safety Assessment Methodologies: extended PSA (ASAMPSA_E) project regarding the investigation of conducting HRA for man-made hazards. The paper proposes a 4-step methodology for the assessment of human interactions in external events (definition and modelling of human interactions; quantification of human failure events; recovery analysis; review). The most relevant factors with respect to HRA for man-made hazards (response execution complexity; existence of procedures with respect to the scenario in question; time available for action; timing of cues; accessibility of equipment; harsh environmental conditions) are presented and discussed thoroughly. The challenges identified in relation to HRA for man-made hazards are highlighted. (authors)

  2. A discriminant analysis prediction model of non-syndromic cleft lip with or without cleft palate based on risk factors.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na

    2016-11-23

    A risk prediction model of non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by a discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and the controls were obtained from 52 birth defects' surveillance hospitals in Hunan Province, China. A questionnaire was administered to collect the variables relevant to NSCL/P through face-to-face interviews. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 influencing factors were retained as predictors in the discriminant prediction model: family income, maternal occupational hazards exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazards exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8, P < 0.001). Self-verification showed that 83.8% of the participants were correctly predicted to be NSCL/P cases or controls, with a sensitivity of 74.3% and a specificity of 88.5%. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model that was established using the risk factors of NSCL/P can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model and to confirm its validity and reliability.

  3. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no Agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  4. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no Agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  5. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  6. Reactivation hazard mapping for ancient landslides in West Belgium

    Directory of Open Access Journals (Sweden)

    O. Dewitte

    2006-01-01

    Full Text Available Several examples in western Europe have shown that, at least for deep-seated rotational slides, reactivation of formerly slipped masses is a more frequent phenomenon than the occurrence of new landslides, therefore representing a higher hazard. We selected a study area comprising 13 landslides located in the Flemish Ardennes (West Belgium) and predicted the hazard related to scarp retreat. The scarp reactivations were identified from the comparison of DTMs produced for 1952 and 1996. Robust results were obtained with the Gamma operator of a fuzzy set approach and a combination of geomorphic, topographic and land use data. We first built a prediction model from the relations linking the 1952–1996 retreat events to the conditioning parameters of 1952. The prediction rate of the resulting susceptibility map is estimated by a cross-validation procedure. We then applied the statistics of this model to the data of 1996 in order to produce a susceptibility map responding to the present-day conditions. Finally, we estimated the conditional probabilities of occurrence of future reactivations for the period 1996–2036.

  7. Flood hazard mapping of Palembang City by using 2D model

    Science.gov (United States)

    Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri

    2017-11-01

    Palembang, as the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that is flooded almost every year. Flooding in the city is highly related to the Musi River Basin. According to the Indonesia National Agency of Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause flooding in the city, such as high-intensity rainfall, inadequate drainage capacity, and backwater flow due to spring tide. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make the flood problem worse. The objective of this study is to develop a flood hazard map of Palembang City using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding the general guideline of disaster risk assessment. The result for the 25-year return period flood shows that, with an inundation area of 112.47 km2, 14 sub-catchments are categorized as high hazard level. It is expected that the hazard map can be used for risk assessment.

  8. Current Knowledge on the Use of Computational Toxicology in Hazard Assessment of Metallic Engineered Nanomaterials.

    Science.gov (United States)

    Chen, Guangchao; Peijnenburg, Willie; Xiao, Yinlong; Vijver, Martina G

    2017-07-12

    As listed by the European Chemicals Agency, the three elements in evaluating the hazards of engineered nanomaterials (ENMs) include the integration and evaluation of toxicity data, categorization and labeling of ENMs, and derivation of hazard threshold levels for human health and the environment. Assessing the hazards of ENMs solely based on laboratory tests is time-consuming, resource intensive, and constrained by ethical considerations. The adoption of computational toxicology into this task has recently become a priority. Alternative approaches such as (quantitative) structure-activity relationships ((Q)SAR) and read-across are of significant help in predicting nanotoxicity and filling data gaps, and in classifying the hazards of ENMs to individual species. Building on this, the species sensitivity distribution (SSD) approach can serve to establish ENM hazard thresholds that sufficiently protect the ecosystem. This article critically reviews the current knowledge on the development of in silico models in predicting and classifying the hazard of metallic ENMs, and the development of SSDs for metallic ENMs. Further discussion includes the significance of well-curated experimental datasets and the interpretation of toxicity mechanisms of metallic ENMs based on reported models. An outlook is also given on future directions of research in this frontier.

  9. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  10. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    Science.gov (United States)

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.

  11. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
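
A minimal sketch of one ingredient of hazard function analysis, assuming SciPy is available: the hazard function h(x) = f(x)/(1 − F(x)) of a Generalized Pareto exceedance model, evaluated numerically and checked against its closed form. The parameter values are arbitrary and are not taken from the study.

```python
# Sketch: hazard function h(x) = f(x) / (1 - F(x)) for a Generalized Pareto
# exceedance model, the kind of ingredient used in hazard function analysis (HFA).
# Parameter values are arbitrary illustrative choices, not from the study.
import numpy as np
from scipy.stats import genpareto

shape, scale = 0.1, 1.0          # GPD shape (xi) and scale (sigma); threshold at 0
x = np.linspace(0.0, 5.0, 6)

pdf = genpareto.pdf(x, shape, loc=0.0, scale=scale)
sf = genpareto.sf(x, shape, loc=0.0, scale=scale)   # survival function 1 - F(x)
hazard = pdf / sf

for xi, h in zip(x, hazard):
    print(f"x = {xi:4.1f}   h(x) = {h:.4f}")

# For the GPD the hazard has the closed form h(x) = 1 / (scale + shape * x),
# which the numerical ratio above reproduces.
```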

  12. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J

    2008-02-11

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.

  13. Ground motion models used in the 2014 U.S. National Seismic Hazard Maps

    Science.gov (United States)

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.

    2015-01-01

    The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.

  14. Development and Validation of a Prediction Model to Estimate Individual Risk of Pancreatic Cancer.

    Science.gov (United States)

    Yu, Ami; Woo, Sang Myung; Joo, Jungnam; Yang, Hye-Ryung; Lee, Woo Jin; Park, Sang-Jae; Nam, Byung-Ho

    2016-01-01

    There is no reliable screening tool to identify people with high risk of developing pancreatic cancer even though pancreatic cancer represents the fifth-leading cause of cancer-related death in Korea. The goal of this study was to develop an individualized risk prediction model that can be used to screen for asymptomatic pancreatic cancer in Korean men and women. Gender-specific risk prediction models for pancreatic cancer were developed using the Cox proportional hazards model based on an 8-year follow-up of a cohort study of 1,289,933 men and 557,701 women in Korea who had biennial examinations in 1996-1997. The performance of the models was evaluated with respect to their discrimination and calibration ability based on the C-statistic and Hosmer-Lemeshow type χ2 statistic. A total of 1,634 (0.13%) men and 561 (0.10%) women were newly diagnosed with pancreatic cancer. Age, height, BMI, fasting glucose, urine glucose, smoking, and age at smoking initiation were included in the risk prediction model for men. Height, BMI, fasting glucose, urine glucose, smoking, and drinking habit were included in the risk prediction model for women. Smoking was the most significant risk factor for developing pancreatic cancer in both men and women. The risk prediction model exhibited good discrimination and calibration ability, and in external validation it had excellent prediction ability. Gender-specific risk prediction models for pancreatic cancer were developed and validated for the first time. The prediction models will be a useful tool for detecting high-risk individuals who may benefit from increased surveillance for pancreatic cancer.
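
As a generic illustration of the modelling approach named above (a Cox proportional hazards model whose discrimination is measured by the C-statistic), the following sketch fits a Cox model to simulated cohort data with the lifelines package; the variables and effect sizes are invented and do not reproduce the Korean cohort model.

```python
# Sketch of fitting and evaluating a Cox proportional hazards model with the
# lifelines package on synthetic data; variable names are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "bmi": rng.normal(24, 3, n),
    "fasting_glucose": rng.normal(95, 15, n),
    "smoker": rng.binomial(1, 0.4, n),
})
# Simulate survival times whose hazard increases with age and smoking.
linear_pred = 0.03 * (df["age"] - 50) + 0.8 * df["smoker"]
time = rng.exponential(scale=10 * np.exp(-linear_pred.to_numpy()))
censor = rng.exponential(scale=12, size=n)
df["duration"] = np.minimum(time, censor)
df["event"] = (time <= censor).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary(decimals=2)
print("C-statistic (discrimination):", round(cph.concordance_index_, 3))
```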

  15. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  16. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    Science.gov (United States)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  17. A set of integrated environmental transport and diffusion models for calculating hazardous releases

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1996-01-01

    A set of numerical transport and dispersion models is incorporated within a graphical interface shell to predict hazardous material released into the environment. The visual shell (EnviroView) consists of an object-oriented knowledge base, which is used for inventory control, site mapping and orientation, and monitoring of materials. Graphical displays of detailed sites, building locations, floor plans, and three-dimensional views within a room are available to the user using a point-and-click interface. In the event of a release to the environment, the user can choose from a selection of analytical, finite element, finite volume, and boundary element methods, which calculate atmospheric transport, groundwater transport, and dispersion within a building interior. The program runs on 486 personal computers under WINDOWS.

  18. Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models

    NARCIS (Netherlands)

    ter Hofstede, F.; Wedel, M.

    In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were

  19. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    NARCIS (Netherlands)

    Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.

    2017-01-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood

  20. Evaluation of seismic hazard at the northwestern part of Egypt

    Science.gov (United States)

    Ezzelarab, M.; Shokry, M. M. F.; Mohamed, A. M. E.; Helal, A. M. A.; Mohamed, Abuoelela A.; El-Hadidy, M. S.

    2016-01-01

    The objective of this study is to evaluate the seismic hazard in northwestern Egypt using the probabilistic seismic hazard assessment approach. The probabilistic approach was carried out based on a recent data set in order to take into account the historic seismicity and the updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. The doubly-truncated exponential model was adopted for calculation of the recurrence parameters. Ground-motion prediction equations that were recently recommended by experts, and that were developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used for the assessment of seismic hazard in the frame of a logic-tree approach. Considering a grid of 0.2° × 0.2° covering the study area, seismic hazard curves were calculated for every node. Hazard maps at bedrock conditions were produced for peak ground acceleration, in addition to six spectral periods (0.1, 0.2, 0.3, 1.0, 2.0 and 3.0 s), for return periods of 72, 475 and 2475 years. The uniform hazard spectra of two selected rock sites at Alexandria and Mersa Matruh Cities were provided. Finally, the hazard curves were de-aggregated to determine the sources that contribute most to the hazard level of 10% probability of exceedance in 50 years for the selected sites.
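
A short worked sketch of two building blocks mentioned above, under illustrative parameter values that are not from the study: a doubly truncated exponential (Gutenberg-Richter) recurrence model, and the Poisson relation linking the quoted return periods (72, 475 and 2475 years) to probabilities of exceedance in a 50-year exposure window.

```python
# Sketch: a doubly truncated exponential (Gutenberg-Richter) magnitude recurrence
# model and the Poisson relation between annual exceedance rate, return period and
# probability of exceedance in an exposure time. Numbers are illustrative only.
import numpy as np

a, b = 4.0, 1.0          # hypothetical Gutenberg-Richter a- and b-values
m_min, m_max = 4.0, 7.0  # truncation magnitudes
beta = b * np.log(10.0)

rate_mmin = 10 ** (a - b * m_min)   # annual rate of events with M >= m_min

def rate_exceed(m):
    """Annual rate of M >= m under the doubly truncated exponential model."""
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return rate_mmin * num / den

for T in (72, 475, 2475):           # return periods used for the hazard maps
    lam = 1.0 / T                   # corresponding annual exceedance rate
    p50 = 1.0 - np.exp(-lam * 50)   # Poisson probability of exceedance in 50 years
    print(f"T = {T:5d} yr  ->  P(exceedance in 50 yr) = {p50:.3f}")

print("annual rate of M >= 6.0 :", round(float(rate_exceed(6.0)), 4))
```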

  1. Assessing the ability of operational snow models to predict snowmelt runoff extremes (Invited)

    Science.gov (United States)

    Wood, A. W.; Restrepo, P. J.; Clark, M. P.

    2013-12-01

    In the western US, the snow accumulation and melt cycle of winter and spring plays a critical role in the region's water management strategies. Consequently, the ability to predict snowmelt runoff at time scales from days to seasons is a key input for decisions in reservoir management, whether for avoiding flood hazards or supporting environmental flows through the scheduling of releases in spring, or for allocating releases for multi-state water distribution in dry seasons of the year (using reservoir systems to provide an invaluable buffer for many sectors against drought). Runoff forecasts thus have important benefits at both wet and dry extremes of the climatological spectrum. The importance of the prediction of the snow cycle motivates an assessment of the strengths and weaknesses of the US's central operational snow model, SNOW17, in contrast to process-modeling alternatives, as they relate to simulating observed snowmelt variability and extremes. To this end, we use a flexible modeling approach that enables an investigation of different choices in model structure, including model physics, parameterization and degree of spatiotemporal discretization. We draw from examples of recent extreme events in western US watersheds and an overall assessment of retrospective model performance to identify fruitful avenues for advancing the modeling basis for the operational prediction of snow-related runoff extremes.

  2. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains

    International Nuclear Information System (INIS)

    Kraus, N.N.; Slovic, P.

    1988-01-01

    Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions

  3. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    Science.gov (United States)

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  4. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    Science.gov (United States)

    Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further. PMID:29186922

  5. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    Directory of Open Access Journals (Sweden)

    Jangwon Suh

    2017-11-01

    Full Text Available In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  6. Issues in testing the new national seismic hazard model for Italy

    Science.gov (United States)

    Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.

    2016-12-01

    It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismology, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences that show how the models have developed toward the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options including deterministic scenario-based ones. Comparing what works

  7. Current Knowledge on the Use of Computational Toxicology in Hazard Assessment of Metallic Engineered Nanomaterials

    Directory of Open Access Journals (Sweden)

    Guangchao Chen

    2017-07-01

    Full Text Available As listed by the European Chemicals Agency, the three elements in evaluating the hazards of engineered nanomaterials (ENMs) include the integration and evaluation of toxicity data, categorization and labeling of ENMs, and derivation of hazard threshold levels for human health and the environment. Assessing the hazards of ENMs solely based on laboratory tests is time-consuming, resource intensive, and constrained by ethical considerations. The adoption of computational toxicology into this task has recently become a priority. Alternative approaches such as (quantitative) structure–activity relationships ((Q)SAR) and read-across are of significant help in predicting nanotoxicity and filling data gaps, and in classifying the hazards of ENMs to individual species. Building on this, the species sensitivity distribution (SSD) approach can serve to establish ENM hazard thresholds that sufficiently protect the ecosystem. This article critically reviews the current knowledge on the development of in silico models in predicting and classifying the hazard of metallic ENMs, and the development of SSDs for metallic ENMs. Further discussion includes the significance of well-curated experimental datasets and the interpretation of toxicity mechanisms of metallic ENMs based on reported models. An outlook is also given on future directions of research in this frontier.

  8. Uncertainty Analysis and Expert Judgment in Seismic Hazard Analysis

    Science.gov (United States)

    Klügel, Jens-Uwe

    2011-01-01

    The large uncertainty associated with the prediction of future earthquakes is usually regarded as the main reason for increased hazard estimates which have resulted from some recent large scale probabilistic seismic hazard analysis studies (e.g. the PEGASOS study in Switzerland and the Yucca Mountain study in the USA). It is frequently overlooked that such increased hazard estimates are characteristic for a single specific method of probabilistic seismic hazard analysis (PSHA): the traditional (Cornell-McGuire) PSHA method which has found its highest level of sophistication in the SSHAC probability method. Based on a review of the SSHAC probability model and its application in the PEGASOS project, it is shown that the surprising results of recent PSHA studies can be explained to a large extent by the uncertainty model used in traditional PSHA, which deviates from the state of the art in mathematics and risk analysis. This uncertainty model, the Ang-Tang uncertainty model, mixes concepts of decision theory with probabilistic hazard assessment methods leading to an overestimation of uncertainty in comparison to empirical evidence. Although expert knowledge can be a valuable source of scientific information, its incorporation into the SSHAC probability method does not resolve the issue of inflating uncertainties in PSHA results. Other, more data driven, PSHA approaches in use in some European countries are less vulnerable to this effect. The most valuable alternative to traditional PSHA is the direct probabilistic scenario-based approach, which is closely linked with emerging neo-deterministic methods based on waveform modelling.

  9. Flood hazards for nuclear power plants

    International Nuclear Information System (INIS)

    Yen, B.C.

    1988-01-01

    Flooding hazards for nuclear power plants may be caused by various external geophysical events. In this paper the hydrologic hazards from flash floods, river floods and heavy rain at the plant site are considered. Depending on the mode of analysis, two types of hazard evaluation are identified: 1) design hazard which is the probability of flooding over an expected service period, and 2) operational hazard which deals with real-time forecasting of the probability of flooding of an incoming event. Hazard evaluation techniques using flood frequency analysis can only be used for type 1) design hazard. Evaluation techniques using rainfall-runoff simulation or multi-station correlation can be used for both types of hazard prediction. (orig.)
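
The "design hazard" notion above, the probability of flooding over an expected service period, can be illustrated with a one-line calculation assuming independent annual maxima; the return periods and service life below are arbitrary examples, not values from the paper.

```python
# Worked example of the "design hazard": the probability of at least one flood
# exceeding the T-year event during an expected service period of n years,
# assuming independent annual maxima. Values are illustrative.
def design_hazard(return_period_years: float, service_years: int) -> float:
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** service_years

for T in (100, 500, 10000):
    print(f"T = {T:6d} yr, 40-yr service life -> P(flooding) = {design_hazard(T, 40):.4f}")
```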

  10. [Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].

    Science.gov (United States)

    Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang

    2014-02-01

    In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper, segmentation of particulate matter with sizes below 2.5 μm was performed based on formation mechanisms and hazard levels to human beings and the environment. Taking into account the mass concentration, number concentration, Hg enrichment factor, and Hg content in different coal ashes, a comprehensive model for evaluating the hazard of PM2.5 emitted by coal-fired boilers was established. Finally, using field experimental data from the previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.

  11. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Omid Boyer

    2013-01-01

    Full Text Available Technological progress has led to an increase in industrial hazardous waste throughout the world. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can result from the location of undesirable facilities and also from routing hazardous waste. In this paper a bi-objective mixed integer programming model for the location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk; the risk of population exposure within a bandwidth along the route is used to measure transportation risk. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem reveal the conflict between the two objectives: it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The problem is applied to Markazi Province in Iran.
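
As a toy illustration of a weighted-sum, bi-objective location-routing formulation (not the authors' model, and using the open-source PuLP interface rather than GAMS/CPLEX), the sketch below locates treatment sites and routes waste while trading off transport cost against a route-risk measure; all data are invented.

```python
# Toy sketch: a tiny location-routing model in which waste from two generation
# nodes is shipped to candidate treatment sites, with a weighted sum of transport
# cost and population-exposure risk as the objective. Solved with PuLP's default solver.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

generators = ["g1", "g2"]
sites = ["s1", "s2"]
supply = {"g1": 30, "g2": 20}          # tonnes of waste to ship
capacity = {"s1": 40, "s2": 40}
open_cost = {"s1": 100.0, "s2": 80.0}
ship_cost = {("g1", "s1"): 2.0, ("g1", "s2"): 4.0,
             ("g2", "s1"): 3.0, ("g2", "s2"): 1.5}
ship_risk = {("g1", "s1"): 5.0, ("g1", "s2"): 1.0,   # exposure along each route
             ("g2", "s1"): 4.0, ("g2", "s2"): 2.0}
w = 0.7   # weight on cost; (1 - w) on risk

prob = LpProblem("hazardous_waste_location_routing", LpMinimize)
x = {(g, s): LpVariable(f"x_{g}_{s}", lowBound=0) for g in generators for s in sites}
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}

cost = lpSum(open_cost[s] * y[s] for s in sites) + \
       lpSum(ship_cost[g, s] * x[g, s] for g in generators for s in sites)
risk = lpSum(ship_risk[g, s] * x[g, s] for g in generators for s in sites)
prob += w * cost + (1 - w) * risk              # weighted-sum scalarization

for g in generators:                            # all waste must be shipped
    prob += lpSum(x[g, s] for s in sites) == supply[g]
for s in sites:                                 # capacity only if the site is opened
    prob += lpSum(x[g, s] for g in generators) <= capacity[s] * y[s]

prob.solve()
print("cost =", value(cost), " risk =", value(risk))
for (g, s), var in x.items():
    print(g, "->", s, ":", var.value())
```

Re-solving over a sweep of the weight w traces an approximate Pareto front between the two objectives, mirroring the cost-risk trade-off described in the abstract.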

  12. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    Science.gov (United States)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considering a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The effect of the parameter ranges on the meta-model accuracy has also been investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions in the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve rockfall quantitative hazard assessment and optimise rockfall mitigation strategies.
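
A generic surrogate-modelling sketch, assuming scikit-learn: a fast classifier is trained to emulate an expensive simulator that decides whether the barrier stops the block. The six inputs and the placeholder "simulator" rule are invented stand-ins, not the FE model or meta-models of the paper.

```python
# Generic surrogate ("meta-model") sketch: emulate expensive simulations of a
# barrier impact with a fast statistical model trained on their inputs/outputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 3000
X = np.column_stack([
    rng.uniform(100, 5000, n),    # block kinetic energy (kJ)
    rng.uniform(0.2, 2.0, n),     # block radius (m)
    rng.uniform(0, 45, n),        # incidence angle (deg)
    rng.uniform(0, 5, n),         # impact height on the barrier (m)
    rng.uniform(-1, 1, n),        # lateral offset from a post (normalised)
    rng.uniform(5, 30, n),        # translational velocity (m/s)
])
# Placeholder "simulator": the barrier stops the block when energy and height are low.
stopped = ((X[:, 0] < 3000) & (X[:, 3] < 4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, stopped, test_size=0.3, random_state=0)
meta = GradientBoostingClassifier().fit(X_tr, y_tr)
print("hold-out accuracy of the surrogate:", accuracy_score(y_te, meta.predict(X_te)))
```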

  13. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    Science.gov (United States)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures have unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physics-based" approach is ground-motion syntheses derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction.

  14. Computational Approaches to Chemical Hazard Assessment

    Science.gov (United States)

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  15. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been devoted to models for bankruptcy prediction and credit risk management. In spite of numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward transitioning to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that all prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
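
A minimal sketch of the kind of comparison described above, run on synthetic imbalanced data with scikit-learn: logistic regression against bagging, boosting and random forest classifiers. It is not the Slovak-company dataset or the article's exact models.

```python
# Sketch comparing a classical model (logistic regression) with ensemble methods
# (bagging, boosting, random forest) on synthetic financial-ratio-like data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=4000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)  # imbalanced "bankrupt" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```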

  16. Modeling hydrologic and geomorphic hazards across post-fire landscapes using a self-organizing map approach

    Science.gov (United States)

    Friedel, Michael J.

    2011-01-01

    Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organizing map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criterion following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
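
An illustrative sketch of the SOM-plus-clustering workflow, assuming the MiniSom package (any SOM implementation would do) and scikit-learn: train a SOM on synthetic basin descriptors, then cluster the SOM prototypes with k-means and score candidate cluster counts with the Davies-Bouldin criterion. The data, map size and parameters are arbitrary and not those of the study.

```python
# Sketch of the self-organizing map (SOM) + k-means workflow on synthetic basin
# descriptors; all inputs are random and purely illustrative.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(3)
basins = rng.normal(size=(540, 10))       # 540 burned basins x 10 standardized variables

som = MiniSom(8, 8, input_len=10, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(basins, 5000)

neurons = som.get_weights().reshape(-1, 10)   # 64 prototype vectors

# Choose the number of clusters by the Davies-Bouldin criterion (lower is better).
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(neurons)
    print(k, round(davies_bouldin_score(neurons, labels), 3))
```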

  17. External validation of models predicting the individual risk of metachronous peritoneal carcinomatosis from colon and rectal cancer.

    Science.gov (United States)

    Segelman, J; Akre, O; Gustafsson, U O; Bottai, M; Martling, A

    2016-04-01

    To externally validate previously published predictive models of the risk of developing metachronous peritoneal carcinomatosis (PC) after resection of nonmetastatic colon or rectal cancer and to update the predictive model for colon cancer by adding new prognostic predictors. Data from all patients with Stage I-III colorectal cancer identified from a population-based database in Stockholm between 2008 and 2010 were used. We assessed the concordance between the predicted and observed probabilities of PC and utilized proportional-hazard regression to update the predictive model for colon cancer. When applied to the new validation dataset (n = 2011), the colon and rectal cancer risk-score models predicted metachronous PC with a concordance index of 79% and 67%, respectively. After adding the subclasses of pT3 and pT4 stage and mucinous tumour to the colon cancer model, the concordance index increased to 82%. In validation of external and recent cohorts, the predictive accuracy was strong in colon cancer and moderate in rectal cancer patients. The model can be used to identify high-risk patients for planned second-look laparoscopy/laparotomy for possible subsequent cytoreductive surgery and hyperthermic intraperitoneal chemotherapy. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  18. A new Predictive Model for Relativistic Electrons in Outer Radiation Belt

    Science.gov (United States)

    Chen, Y.

    2017-12-01

    Relativistic electrons trapped in the Earth's outer radiation belt present a highly hazardous radiation environment for spaceborne electronics. These energetic electrons, with kinetic energies up to several megaelectron-volts (MeV), manifest a highly dynamic and event-specific nature due to the delicate interplay of competing transport, acceleration and loss processes. Therefore, developing a forecasting capability for outer belt MeV electrons has long been a critical and challenging task for the space weather community. Recently, the vital roles of electron resonance with waves (such as chorus and electromagnetic ion cyclotron waves) have been widely recognized; however, it is still difficult for current diffusion radiation belt models to reproduce the behavior of MeV electrons during individual geomagnetic storms, mainly because of the large uncertainties existing in input parameters. In this work, we expanded our previous cross-energy cross-pitch-angle coherence study and developed a new predictive model for MeV electrons over a wide range of L-shells inside the outer radiation belt. This new model uses NOAA POES observations from low-Earth orbits (LEOs) as inputs to provide high-fidelity nowcast (multiple-hour prediction) and forecast (> 1 day prediction) of the energization of MeV electrons as well as the evolving MeV electron distributions afterwards during storms. Performance of the predictive model is quantified by long-term in situ data from Van Allen Probes and LANL GEO satellites. This study adds new scientific significance to an existing LEO space infrastructure and provides reliable and powerful tools to the whole space community.

  19. Model for the prediction of subsurface strata movement due to underground mining

    Science.gov (United States)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed, to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence function method are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers’ understanding of the moving behavior of all strata over underground excavations, and assist in making any damage mitigation plan. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use a borehole TV monitor recording to identify the progress of subsurface stratum bedding and caving in a coal mine; the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are used to compare with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the
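
A generic influence-function sketch (a Knothe-type kernel, not the paper's exact step-wise model): the predicted subsidence at a surface point is obtained by integrating an influence function over the extracted panel. All geometry and parameter values below are illustrative.

```python
# Generic influence-function sketch: subsidence at surface point x is the integral
# of an influence function over the extracted panel, evaluated numerically in 1-D.
import numpy as np

s_max = 2.0          # maximum possible subsidence (m), illustrative
H = 200.0            # depth of the extracted seam (m)
tan_beta = 2.0       # tangent of the angle of major influence
r = H / tan_beta     # radius of major influence

panel = np.linspace(-150.0, 150.0, 3001)      # extracted span from -150 m to +150 m
d_xi = panel[1] - panel[0]

def influence(dist):
    """Knothe-type influence function f(d) = (1/r) * exp(-pi * d^2 / r^2)."""
    return (1.0 / r) * np.exp(-np.pi * dist ** 2 / r ** 2)

surface_x = np.linspace(-400.0, 400.0, 9)
for x in surface_x:
    s = s_max * np.sum(influence(x - panel)) * d_xi   # numerical integration
    print(f"x = {x:7.1f} m   predicted subsidence = {s:5.3f} m")
```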

  20. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ postearthquake collapse data in relation to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were consistent with the prediction results of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
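
A simplified stand-in for the approach described above, not a reproduction of it: entropy weights are computed for synthetic indexes, and samples are then assigned to the class with the smaller Mahalanobis distance to the class mean.

```python
# Sketch combining entropy weighting of indexes with a Mahalanobis-distance
# discriminant rule on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n, d = 120, 9                                   # samples x discriminant factors
X = np.vstack([rng.normal(0.0, 1.0, (60, d)),   # class 0: low collapse risk
               rng.normal(1.0, 1.0, (60, d))])  # class 1: high collapse risk
y = np.repeat([0, 1], 60)

# Entropy weights: indexes whose normalized values are more spread out carry
# more information and receive larger weights.
P = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
P = P / P.sum(0)
entropy = -(P * np.log(P + 1e-12)).sum(0) / np.log(n)
weights = (1 - entropy) / (1 - entropy).sum()
Xw = X * weights

# Mahalanobis-distance discriminant with a pooled covariance (equal group sizes).
cov0 = np.cov(Xw[y == 0], rowvar=False)
cov1 = np.cov(Xw[y == 1], rowvar=False)
cov_inv = np.linalg.inv(0.5 * (cov0 + cov1))
means = [Xw[y == c].mean(0) for c in (0, 1)]

def mahalanobis(row, mu):
    diff = row - mu
    return float(diff @ cov_inv @ diff)

pred = np.array([np.argmin([mahalanobis(row, mu) for mu in means]) for row in Xw])
print("resubstitution (back substitution) accuracy:", (pred == y).mean())
```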

  1. Generating a robust prediction model for stage I lung adenocarcinoma recurrence after surgical resection.

    Science.gov (United States)

    Wu, Yu-Chung; Wei, Nien-Chih; Hung, Jung-Jyh; Yeh, Yi-Chen; Su, Li-Jen; Hsu, Wen-Hu; Chou, Teh-Ying

    2017-10-03

    Lung cancer mortality remains high even after successful resection. Adjuvant treatment benefits stage II and III patients, but not stage I patients, and most studies fail to predict recurrence in stage I patients. Our study included 211 lung adenocarcinoma patients (stages I-IIIA; 81% stage I) who received curative resections at Taipei Veterans General Hospital between January 2001 and December 2012. We generated a prediction model using 153 samples, with validation using an additional 58 clinical outcome-blinded samples. Gene expression profiles were generated using formalin-fixed, paraffin-embedded tissue samples and microarrays. Data analysis was performed using a supervised clustering method. The prediction model generated from mixed-stage samples successfully separated patients at high vs. low risk for recurrence. The validation test hazard ratio (HR = 4.38) was similar to that of the training tests (HR = 4.53), indicating a robust training process. Our prediction model successfully distinguished high- from low-risk stage IA and IB patients, with a difference in 5-year disease-free survival between high- and low-risk patients of 42% for stage IA and 45% for stage IB. The model identifies lung adenocarcinoma patients at high risk for recurrence who may benefit from adjuvant therapy. The difference in disease-free survival between the high-risk and low-risk groups represents a more than two-fold improvement over earlier published results.

  2. Complete hazard ranking to analyze right-censored data: An ALS survival study.

    Science.gov (United States)

    Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang

    2017-12-01

    Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked in first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.

  3. Trimming the UCERF2 hazard logic tree

    Science.gov (United States)

    Porter, Keith A.; Field, Edward H.; Milner, Kevin

    2012-01-01

    The Uniform California Earthquake Rupture Forecast 2 (UCERF2) is a fully time‐dependent earthquake rupture forecast developed with sponsorship of the California Earthquake Authority (Working Group on California Earthquake Probabilities [WGCEP], 2007; Field et al., 2009). UCERF2 contains 480 logic‐tree branches reflecting choices among nine modeling uncertainties in the earthquake rate model shown in Figure 1. For seismic hazard analysis, it is also necessary to choose a ground‐motion‐prediction equation (GMPE) and set its parameters. Choosing among four next‐generation attenuation (NGA) relationships results in a total of 1920 hazard calculations per site. The present work is motivated by a desire to reduce the computational effort involved in a hazard analysis without understating uncertainty. We set out to assess which branching points of the UCERF2 logic tree contribute most to overall uncertainty, and which might be safely ignored (set to only one branch) without significantly biasing results or affecting some useful measure of uncertainty. The trimmed logic tree will have all of the original choices from the branching points that contribute significantly to uncertainty, but only one arbitrarily selected choice from the branching points that do not.

  4. Interconnected ponds operation for flood hazard distribution

    Science.gov (United States)

    Putra, S. S.; Ridwan, B. W.

    2016-05-01

    A climatic anomaly that comes with extreme rainfall will increase the flood hazard in an area within a short period of time. The river capacity for discharging a flood is not continuous along the river stretch and is sensitive to the flood peak. This paper presents alternatives for locating flood retention ponds that are physically feasible for reducing the flood peak. The flood ponds were designed based on flood curve number criteria (TR-55, USDA) with the aim of rapid flood-peak capture and gradual return of the stored water to the river. As a case study, the hydrologic condition of the upper Ciliwung river basin with several presumed flood pond locations was conceptually designed. A fundamental tank model reproducing the operation of the interconnected ponds was elaborated to achieve the design flood discharge flowing to the downstream area. The flood hazard distribution status, used as the model performance criterion, was computed within the Ciliwung river reach at the Manggarai Sluice Gate. The predicted hazard reduction achieved by operating the interconnected retention areas was benchmarked against the normal flow condition.
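
    As a rough illustration of the kind of tank-model routing the record describes, the following Python sketch passes a synthetic flood hydrograph through a single linear-reservoir detention pond; the time step, outlet coefficient, and inflow series are illustrative assumptions, not values from the Ciliwung study.

    import numpy as np

    dt = 3600.0                     # time step [s] (1 hour)
    k = 0.15                        # outlet coefficient [1/h]: outflow proportional to storage
    inflow = np.array([5, 20, 80, 150, 90, 40, 15, 5, 5, 5], dtype=float)  # m^3/s

    storage = 0.0                   # pond storage [m^3]
    outflow = np.zeros_like(inflow)
    for i, q_in in enumerate(inflow):
        q_out = k * storage / 3600.0          # release proportional to current storage [m^3/s]
        storage += (q_in - q_out) * dt        # water balance over the time step
        outflow[i] = q_out

    print("peak inflow :", inflow.max(), "m^3/s")
    print("peak outflow:", round(outflow.max(), 1), "m^3/s  (attenuated flood peak)")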

  5. Assessment of liquefaction-induced hazards using Bayesian networks based on standard penetration test data

    Science.gov (United States)

    Tang, Xiao-Wei; Bai, Xu; Hu, Ji-Lei; Qiu, Jiang-Nan

    2018-05-01

    Liquefaction-induced hazards such as sand boils, ground cracks, settlement, and lateral spreading are responsible for considerable damage to engineering structures during major earthquakes. Presently, there is no effective empirical approach that can assess different liquefaction-induced hazards in one model. This is because of the uncertainties and complexity of the factors related to seismic liquefaction and liquefaction-induced hazards. In this study, Bayesian networks (BNs) are used to integrate multiple factors related to seismic liquefaction, sand boils, ground cracks, settlement, and lateral spreading into a model based on standard penetration test data. The constructed BN model can assess four different liquefaction-induced hazards together. In a case study, the BN method outperforms an artificial neural network and Ishihara and Yoshimine's simplified method in terms of accuracy, Brier score, recall, precision, and area under the curve (AUC) of the receiver operating characteristic (ROC). This demonstrates that the BN method is a good alternative tool for the risk assessment of liquefaction-induced hazards. Furthermore, the performance of the BN model in estimating liquefaction-induced hazards in Japan's 2011 Tōhoku earthquake confirms its correctness and reliability compared with the liquefaction potential index approach. The proposed BN model can also predict whether the soil becomes liquefied after an earthquake and can deduce the chain reaction process of liquefaction-induced hazards and perform backward reasoning. The assessment results from the proposed model provide informative guidelines for decision-makers to detect the damage state of a field following liquefaction.
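
    The record's Bayesian network integrates many SPT-based factors; the hand-rolled Python fragment below illustrates only the underlying mechanics, namely Bayes' rule on a single parent-child pair, which is the kind of backward (diagnostic) reasoning mentioned above. The probabilities are made-up placeholders, not CPTs from the paper.

    # Backward reasoning from an observed liquefaction-induced hazard (sand boils)
    # to the probability of liquefaction.  All probabilities are hypothetical.
    p_liq = 0.30                    # prior P(liquefaction)
    p_boils_given_liq = 0.70        # P(sand boils | liquefaction)
    p_boils_given_no_liq = 0.05     # P(sand boils | no liquefaction)

    p_boils = p_boils_given_liq * p_liq + p_boils_given_no_liq * (1.0 - p_liq)
    p_liq_given_boils = p_boils_given_liq * p_liq / p_boils   # Bayes' rule
    print(f"P(liquefaction | sand boils observed) = {p_liq_given_boils:.2f}")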

  6. Guidance Index for Shallow Landslide Hazard Analysis

    Directory of Open Access Journals (Sweden)

    Cheila Avalon Cullen

    2016-10-01

    Full Text Available Rainfall-induced shallow landslides are one of the most frequent hazards on sloping terrain. Intense storms with high-intensity, long-duration rainfall have a high potential to trigger rapidly moving soil masses due to changes in pore water pressure and seepage forces. Nevertheless, regardless of the intensity and/or duration of the rainfall, shallow landslides are influenced by antecedent soil moisture conditions. To date, no system exists that dynamically interrelates these two factors on large scales. This work introduces a Shallow Landslide Index (SLI) as the first implementation of antecedent soil moisture conditions for the hazard analysis of shallow rainfall-induced landslides. The proposed mathematical algorithm is built using a logistic regression method that systematically learns from a comprehensive landslide inventory. Initially, root-soil moisture and rainfall measurements modeled from AMSR-E and TRMM, respectively, are used as proxies to develop the index. The input dataset is randomly divided into training and verification sets using the Hold-Out method. Validation results indicate that the best-fit model predicts the highest number of cases correctly at 93.2% accuracy. Subsequently, as AMSR-E and TRMM stopped working in October 2011 and April 2015, respectively, root-soil moisture and rainfall measurements modeled by SMAP and GPM are used to develop models that calculate the SLI for 10, 7, and 3 days. The resulting models indicate a strong relationship (78.7%, 79.6%, and 76.8%, respectively) between the predictors and the predicted value. The results also highlight important remaining challenges, such as adequate information for algorithm functionality and satellite-based data reliability. Nevertheless, the experimental system can potentially be used as a dynamic indicator of the total amount of antecedent moisture and rainfall (for a given duration of time) needed to trigger a shallow landslide in a susceptible area.
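
    As a sketch of the logistic-regression-plus-Hold-Out workflow described above, the following Python example fits a classifier to synthetic antecedent-soil-moisture and rainfall features; the feature construction and coefficients are assumptions for illustration, not the SLI algorithm or its satellite inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 500
    soil_moisture = rng.uniform(0.05, 0.45, n)      # proxy for root-zone soil moisture
    rain_3day = rng.gamma(2.0, 20.0, n)             # 3-day accumulated rainfall [mm]
    # synthetic ground truth: wetter soil plus heavier rain raises landslide odds
    logit = -8.0 + 12.0 * soil_moisture + 0.05 * rain_3day
    landslide = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([soil_moisture, rain_3day])
    X_tr, X_te, y_tr, y_te = train_test_split(X, landslide, test_size=0.3, random_state=1)

    model = LogisticRegression().fit(X_tr, y_tr)           # the landslide-index classifier
    print("hold-out accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
    print("coefficients (moisture, rain):", model.coef_.round(3))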

  7. Report 3: Guidance document on practices to model and implement Extreme Weather hazards in extended PSA

    International Nuclear Information System (INIS)

    Alzbutas, R.; Ostapchuk, S.; Borysiewicz, M.; Decker, K.; Kumar, Manorma; Haeggstroem, A.; Nitoi, M.; Groudev, P.; Parey, S.; Potempski, S.; Raimond, E.; Siklossy, T.

    2016-01-01

    The goal of this report is to provide guidance on practices to model Extreme Weather hazards and implement them in extended level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the End Users Workshop. This guidance focuses on extreme weather hazards, namely extreme wind, extreme temperature, and snow pack. Other hazards, however, are considered in cases where they are correlated/associated with the hazard under discussion. The guidance developed refers to existing guidance whenever possible. As recommended by end users, this guidance covers questions of developing integrated and/or separate extreme weather PSA models. (authors)

  8. Predictive modeling studies for the ecotoxicity of ionic liquids towards the green algae Scenedesmus vacuolatus.

    Science.gov (United States)

    Das, Rudra Narayan; Roy, Kunal

    2014-06-01

    Hazardous potential of ionic liquids is becoming an issue of high concern with increasing application of these compounds in various industrial processes. Predictive toxicological modeling on ionic liquids provides a rational assessment strategy and aids in developing suitable guidance for designing novel analogues. The present study attempts to explore the chemical features of ionic liquids responsible for their ecotoxicity towards the green algae Scenedesmus vacuolatus by developing mathematical models using extended topochemical atom (ETA) indices along with other categories of chemical descriptors. The entire study has been conducted with reference to the OECD guidelines for QSAR model development using predictive classification and regression modeling strategies. The best models from both the analyses showed that ecotoxicity of ionic liquids can be decreased by reducing chain length of cationic substituents and increasing hydrogen bond donor feature in cations, and replacing bulky unsaturated anions with simple saturated moiety having less lipophilic heteroatoms. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failures in order to increase the design life, eliminate or reduce the likelihood of failure, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional stresses. The Markov Chain was used to model two states based on the probability of failure due to bending and torsional stress. Most investigations have revealed that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than that for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data close to the field data with a minimal percentage of error and, for practical application, provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
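
    A minimal Python sketch of a two-state Markov chain of the kind described (state 0 = bending-dominated loading, state 1 = torsion-dominated loading) is given below; the transition probabilities are hypothetical, not the crankshaft field values, and the script reports both the analytical stationary distribution and a Monte Carlo estimate.

    import numpy as np

    P = np.array([[0.9, 0.1],    # transition probabilities from state 0 (bending)
                  [0.3, 0.7]])   # transition probabilities from state 1 (torsion)

    # analytical long-run (stationary) distribution: solve pi P = pi, sum(pi) = 1
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    pi /= pi.sum()
    print("stationary distribution [bending, torsion]:", pi.round(3))

    # Monte Carlo simulation of the load-state sequence over many cycles
    rng = np.random.default_rng(42)
    state, counts = 0, np.zeros(2)
    for _ in range(100_000):
        state = rng.choice(2, p=P[state])
        counts[state] += 1
    print("simulated state occupancy:", (counts / counts.sum()).round(3))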

  10. Intra- and interspecies gene expression models for predicting drug response in canine osteosarcoma.

    Science.gov (United States)

    Fowles, Jared S; Brown, Kristen C; Hess, Ann M; Duval, Dawn L; Gustafson, Daniel L

    2016-02-19

    Genomics-based predictors of drug response have the potential to improve outcomes associated with cancer therapy. Osteosarcoma (OS), the most common primary bone cancer in dogs, is commonly treated with adjuvant doxorubicin or carboplatin following amputation of the affected limb. We evaluated the use of gene-expression based models built in an intra- or interspecies manner to predict chemosensitivity and treatment outcome in canine OS. Models were built and evaluated using microarray gene expression and drug sensitivity data from human and canine cancer cell lines, and canine OS tumor datasets. The "COXEN" method was utilized to filter gene signatures between human and dog datasets based on strong co-expression patterns. Models were built using linear discriminant analysis via the misclassification penalized posterior algorithm. The best doxorubicin model involved genes identified in human lines that were co-expressed and trained on canine OS tumor data, which accurately predicted clinical outcome in 73 % of dogs (p = 0.0262, binomial). The best carboplatin model utilized canine lines for gene identification and model training, with canine OS tumor data for co-expression. Dogs whose treatment matched our predictions had significantly better clinical outcomes than those that didn't (p = 0.0006, Log Rank), and this predictor significantly associated with longer disease free intervals in a Cox multivariate analysis (hazard ratio = 0.3102, p = 0.0124). Our data show that intra- and interspecies gene expression models can successfully predict response in canine OS, which may improve outcome in dogs and serve as pre-clinical validation for similar methods in human cancer research.

  11. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    Science.gov (United States)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.

  12. Defaultable Game Options in a Hazard Process Model

    Directory of Open Access Journals (Sweden)

    Tomasz R. Bielecki

    2009-01-01

    Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.

  13. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. Flood hazard zoning in Yasooj region, Iran, using GIS and multi-criteria decision analysis

    Directory of Open Access Journals (Sweden)

    Omid Rahmati

    2016-05-01

    Full Text Available Flood is considered the most common natural disaster worldwide during the last decades. Flood hazard potential mapping is required for the management and mitigation of floods. The present research aimed to assess the efficiency of the analytical hierarchy process (AHP) in identifying potential flood hazard zones by comparison with the results of a hydraulic model. Initially, four parameters, viz. distance to river, land use, elevation, and land slope, were used for part of the Yasooj River, Iran. In order to determine the weight of each effective factor, questionnaires of comparison ratings on Saaty's scale were prepared and distributed to eight experts. The normalized weights of the criteria/parameters were determined based on Saaty's nine-point scale and their importance in specifying flood hazard potential zones using the AHP and eigenvector methods. The set of criteria were integrated by the weighted linear combination method using ArcGIS 10.2 software to generate the flood hazard prediction map. The inundation simulation (extent and depth of flood) was conducted using the hydrodynamic program HEC-RAS for 50- and 100-year return-period floods. The validation of the flood hazard prediction map was conducted based on the flood extent and depth maps. The results showed that the AHP technique is promising for making accurate and reliable predictions of flood extent. Therefore, the AHP and geographic information system (GIS) techniques are suggested for assessment of flood hazard potential, specifically in regions lacking data.
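
    The AHP weighting step described above can be sketched as follows: the principal eigenvector of a Saaty pairwise-comparison matrix gives the normalized criterion weights, and the consistency ratio checks the coherence of the expert judgements. The 4x4 matrix below (distance to river, land use, elevation, slope) is a made-up example, not the questionnaire results from the study.

    import numpy as np

    A = np.array([[1.0, 3.0, 5.0, 4.0],
                  [1/3, 1.0, 3.0, 2.0],
                  [1/5, 1/3, 1.0, 1/2],
                  [1/4, 1/2, 2.0, 1.0]])   # hypothetical Saaty pairwise comparisons

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized criterion weights
    lam_max = eigvals.real[k]

    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)             # consistency index
    ri = 0.90                                # Saaty's random index for n = 4
    print("weights:", w.round(3), " consistency ratio:", round(ci / ri, 3))  # CR < 0.1 is acceptable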

  15. Predictive modeling of hazardous waste landfill total above-ground biomass using passive optical and LIDAR remotely sensed data

    Science.gov (United States)

    Hadley, Brian Christopher

    This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass against an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative of green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400--960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
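
    To illustrate the contrast between ordinary least squares and tree-structured regression mentioned in the record, the short Python sketch below compares the two on synthetic biomass data with a nonlinear response; the "derivative-type index" and elevation predictors are stand-ins, not the MWMF spectral or LIDAR variables.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(7)
    n = 400
    dgvi = rng.uniform(0.0, 1.0, n)                         # derivative-type vegetation index
    elevation = rng.uniform(80.0, 120.0, n)                 # LIDAR-derived elevation [m]
    biomass = 2.0 + 6.0 * dgvi**2 + rng.normal(0, 0.7, n)   # synthetic nonlinear response

    X = np.column_stack([dgvi, elevation])
    X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, test_size=0.3, random_state=0)

    ols = LinearRegression().fit(X_tr, y_tr)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print("OLS  R^2:", round(r2_score(y_te, ols.predict(X_te)), 3))
    print("Tree R^2:", round(r2_score(y_te, tree.predict(X_te)), 3))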

  16. Complete hazard ranking to analyze right-censored data: An ALS survival study.

    Directory of Open Access Journals (Sweden)

    Zhengnan Huang

    2017-12-01

    Full Text Available Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners including Gaussian process regression, Lasso, and random forest on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge, where it ranked first, giving more accurate ranking predictions on the PRO-ACT ALS dataset than the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.

  17. Airborne geophysical radon hazard mapping

    International Nuclear Information System (INIS)

    Walker, P.

    1993-01-01

    Shales containing uranium pose a radon health hazard even when covered by several meters of overburden. Such an alum shale in southern Norway has been mapped with a joint helicopter-borne electromagnetic (HEM) and radiometric survey. Results are compared with ground spectrometer, radon emanometer, and radon gas measurements in dwellings, and a model to predict radon gas concentrations from the airborne data is developed. Since the shale is conductive, combining the HEM data with the radiometric channel allows the shale to be mapped with greater reliability than if the radiometric channel were used alone. Radiometrically more active areas which do not pose a radon gas hazard can thus be separated from the shales which do. The ground follow-up work consisted of spectrometer and radon emanometer measurements over a uranium anomaly coinciding with a conductor. The correlation between the airborne uranium channel, the ground uranium channel, and emanometry is extremely good, indicating that airborne geophysics can, in this case, be used to predict areas having a high radon potential. Contingency tables comparing both radon exhalation and concentration in dwellings with the airborne uranium data show that a strong relationship exists between exhalation and the airborne data, while a relationship between concentration and the airborne data is present but weaker.

  18. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)

    Directory of Open Access Journals (Sweden)

    Lixin Yan

    2016-07-01

    Full Text Available The ability to identify hazardous traffic events is already considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a naturalistic driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the prediction accuracy was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm ranked best in terms of prediction accuracy. The conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles.
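
    A compact sketch of the classification step is shown below using scikit-learn's SVC, whose underlying solver is an SMO-type algorithm (libsvm); the driving features and labels are synthetic stand-ins for the naturalistic driving data, and no Markov-blanket feature selection is performed here.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(3)
    n = 600
    speed_std = rng.gamma(2.0, 2.0, n)          # standard deviation of speed
    brake_std = rng.gamma(1.5, 1.0, n)          # standard deviation of brake pressure
    accel_z = rng.normal(0.0, 0.3, n)           # vertical acceleration [g]
    # synthetic label: erratic speed/braking raises the odds of a hazardous event
    score = 0.6 * speed_std + 1.2 * brake_std + 2.0 * np.abs(accel_z)
    hazardous = score + rng.normal(0, 1.0, n) > 5.0

    X = np.column_stack([speed_std, brake_std, accel_z])
    X_tr, X_te, y_tr, y_te = train_test_split(X, hazardous, test_size=0.3, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)
    print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))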

  19. Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria

    Science.gov (United States)

    del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan

    2017-04-01

    The seismicity in Austria can be categorized as moderate. Although the hazard seems to be rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are useful tools for hazard and risk assessment. Therefore, this study aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will support further interpretation of both contemporary and historical earthquakes. A total of 285 events whose epicenters were located in Austria, and a total of 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events span the period 1000-2014 and are characterized by a local magnitude greater than 3. In the first stage of model development, the data were carefully selected; e.g., only intensities equal to or greater than III were used. In the second stage, the data were fitted to the selected empirical model. Finally, geology and topography corrections were obtained by means of the model residuals in order to derive intensity-based site amplification effects.

  20. Introduction: Hazard mapping

    Science.gov (United States)

    Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M

    2014-01-01

    Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.

  1. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, relative to what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  2. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used three advanced methods, namely ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracy was validated using receiver operating characteristic curves. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.

  3. A multidimensional stability model for predicting shallow landslide size and shape across landscapes.

    Science.gov (United States)

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-11-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.

  4. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
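
    For the generalized Pareto POT model mentioned above, the hazard function has a simple closed form: with scale σ and shape ξ, the survival function is S(x) = (1 + ξx/σ)^(-1/ξ) and h(x) = f(x)/S(x) = 1/(σ + ξx). The Python check below compares this closed form against a numerical evaluation; the parameter values are arbitrary illustrations, not results from the paper.

    import numpy as np
    from scipy.stats import genpareto

    sigma, xi = 10.0, 0.2
    x = np.linspace(0.0, 50.0, 6)

    h_closed = 1.0 / (sigma + xi * x)                                   # closed-form hazard
    h_numeric = genpareto.pdf(x, c=xi, scale=sigma) / genpareto.sf(x, c=xi, scale=sigma)

    print(np.allclose(h_closed, h_numeric))   # True: the two evaluations agree
    print(h_closed.round(4))                  # hazard decreases with x when xi > 0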

  5. Hazard Assessment in a Big Data World

    Science.gov (United States)

    Kossobokov, Vladimir; Nekrasova, Anastasia

    2017-04-01

    Open data in a Big Data World provide unprecedented opportunities for enhancing scientific studies and better understanding of the Earth System. At the same time, they open wide avenues for deceptive associations in inter- and transdisciplinary data, leading to erroneous predictions that are unacceptable for implementation. Even advanced tools of data analysis may lead to wrong assessments when inappropriately used to describe the phenomenon under consideration. A (self-) deceptive conclusion can be avoided only by verification of candidate models in experiments on empirical data. Seismology is not an exception. Moreover, the seismic evidence accumulated to date clearly demonstrates that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when subjected to objective hypothesis testing. In many cases of seismic hazard assessment (SHA), either probabilistic or deterministic, term-less or short-term, the claims of high potential for model forecasts are based on a flawed application of statistics and are therefore hardly suitable for communication to decision makers, a situation that creates numerous deception points and resulting controversies. So far, most, if not all, standard probabilistic methods to assess seismic hazard and associated risks are based on subjective, commonly unrealistic, and even erroneous assumptions about seismic recurrence, and none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Accurate testing against real observations must be done before claiming seismically hazardous areas and/or times. The set of errors of the first and second kind in such a comparison permits evaluating the effectiveness of an SHA method and determining the optimal choice of parameters with regard to a user-defined cost-benefit function. The information obtained in testing experiments may supply

  6. Developing and validating a new precise risk-prediction model for new-onset hypertension: The Jichi Genki hypertension prediction model (JG model).

    Science.gov (United States)

    Kanegae, Hiroshi; Oikawa, Takamitsu; Suzuki, Kenji; Okawara, Yukie; Kario, Kazuomi

    2018-03-31

    No integrated risk assessment tools that include lifestyle factors and uric acid have been developed. In accordance with the Industrial Safety and Health Law in Japan, a follow-up examination of 63 495 normotensive individuals (mean age 42.8 years) who underwent a health checkup in 2010 was conducted every year for 5 years. The primary endpoint was new-onset hypertension (systolic blood pressure [SBP]/diastolic blood pressure [DBP] ≥ 140/90 mm Hg and/or the initiation of antihypertensive medications with self-reported hypertension). During the mean 3.4 years of follow-up, 7402 participants (11.7%) developed hypertension. The prediction model included age, sex, body mass index (BMI), SBP, DBP, low-density lipoprotein cholesterol, uric acid, proteinuria, current smoking, alcohol intake, eating rate, DBP by age, and BMI by age at baseline and was created by using Cox proportional hazards models to calculate 3-year absolute risks. The derivation analysis confirmed that the model performed well with respect to both discrimination and calibration (n = 63 495; C-statistic = 0.885, 95% confidence interval [CI], 0.865-0.903; χ² statistic = 13.6, degrees of freedom [df] = 7). In the external validation analysis, moreover, the model performed well in both its discrimination and calibration characteristics (n = 14 168; C-statistic = 0.846; 95% CI, 0.775-0.905; χ² statistic = 8.7, df = 7). Adding LDL cholesterol, uric acid, proteinuria, alcohol intake, eating rate, and BMI by age to the base model yielded a significantly higher C-statistic, net reclassification improvement (NRI), and integrated discrimination improvement, especially the non-event NRI (NRI = 0.127, 95% CI = 0.100-0.152; NRI non-event = 0.108, 95% CI = 0.102-0.117). In conclusion, a highly precise model with good performance was developed for predicting incident hypertension using the new parameters of eating rate, uric acid, proteinuria, and BMI by age. ©2018 Wiley Periodicals, Inc.
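
    A sketch of how a 3-year absolute risk can be read off a Cox proportional hazards fit is given below, assuming the lifelines package is available and using a synthetic cohort; the covariates (age, SBP, BMI), effect sizes, and censoring scheme are illustrative assumptions unrelated to the JG-model data.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 2000
    age = rng.uniform(30, 65, n)
    sbp = rng.normal(120, 12, n)
    bmi = rng.normal(23, 3, n)
    risk = 0.03 * (age - 45) + 0.02 * (sbp - 120) + 0.05 * (bmi - 23)
    t_event = rng.exponential(scale=20 * np.exp(-risk))      # latent onset time [years]
    t_censor = rng.uniform(1, 5, n)                          # administrative censoring
    df = pd.DataFrame({
        "age": age, "sbp": sbp, "bmi": bmi,
        "T": np.minimum(t_event, t_censor),
        "E": (t_event <= t_censor).astype(int),
    })

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    new = pd.DataFrame({"age": [55.0], "sbp": [138.0], "bmi": [27.0]})
    surv_3y = cph.predict_survival_function(new, times=[3.0]).iloc[0, 0]
    print("predicted 3-year absolute risk:", round(1.0 - surv_3y, 3))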

  7. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  8. Sickness absence, moral hazard, and the business cycle.

    Science.gov (United States)

    Pichler, Stefan

    2015-06-01

    The procyclical nature of sickness absence has been documented by many scholars in literature. So far, explanations have been based on labor force composition and reduced moral hazard caused by fear of job loss during recessions. In this paper, we propose and test a third mechanism caused by reduced moral hazard during booms and infections. We suggest that the workload is higher during economic booms and thus employees have to go to work despite being sick. In a theoretical model focusing on infectious diseases, we show that this will provoke infections of coworkers leading to overall higher sickness absence during economic upturns. Using state-level aggregated data from 112 German public health insurance funds (out of 145 in total), we find that sickness absence due to infectious diseases shows the largest procyclical pattern, as predicted by our theoretical model. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Challenges in understanding, modelling, and mitigating Lake Outburst Flood Hazard: experiences from Central Asia

    Science.gov (United States)

    Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.

    2010-05-01

    Lake Outburst Floods can evolve from complex process chains like avalanches of rock or ice that produce flood waves in a lake which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams which can cause sudden outflow of accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway - though problematic - has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves that overtop and eventually weaken the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also

  10. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  11. Developments in consequence modelling of accidental releases of hazardous materials

    NARCIS (Netherlands)

    Boot, H.

    2012-01-01

    The modelling of consequences of releases of hazardous materials in the Netherlands has mainly been based on the “Yellow Book”. Although there is no updated version of this official publication, new insights have been developed during the last decades. This article will give an overview of new

  12. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  13. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  14. Probabilistic Seismic Hazard Assessment for Himalayan-Tibetan Region from Historical and Instrumental Earthquake Catalogs

    Science.gov (United States)

    Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui

    2018-02-01

    The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records dating back more than 1000 years and an updated, homogenized, and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude M max, are estimated using a maximum likelihood algorithm that accounts for the incompleteness of the catalog. To compute the hazard values, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, the hazard values are found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
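
    One of the seismicity parameters mentioned above, the Gutenberg-Richter b value, is commonly estimated with Aki's (1965) maximum-likelihood formula plus Utsu's correction for binned magnitudes; the Python sketch below applies it to a synthetic catalog with an assumed completeness magnitude, so all numbers are illustrative only.

    import numpy as np

    rng = np.random.default_rng(5)
    mc, dm, years = 4.0, 0.1, 50.0     # completeness magnitude, bin width, catalog length
    true_b = 1.0
    beta = true_b * np.log(10)
    # simulate continuous magnitudes complete above the bin edge mc - dm/2,
    # then bin them to 0.1 units as in a real catalog
    m_cont = (mc - dm / 2.0) + rng.exponential(scale=1.0 / beta, size=800)
    mags = np.round(m_cont / dm) * dm

    # Aki (1965) maximum-likelihood estimate with Utsu's bin correction
    b_hat = np.log10(np.e) / (mags.mean() - (mc - dm / 2.0))
    rate = mags.size / years                 # mean annual rate of events with M >= Mc
    print("b-value estimate:", round(b_hat, 2), " annual rate of M>=Mc:", rate)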

  15. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and the vegetation index, respectively. Maps of topography, soil type, lineaments, and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and the normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were compared with the field-verified landslide locations. Among the three cases of applying the logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients in the other two areas, the case of Selangor based on the logistic regression coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross

  16. Relationship of Predicted Risk of Developing Invasive Breast Cancer, as Assessed with Three Models, and Breast Cancer Mortality among Breast Cancer Patients.

    Directory of Open Access Journals (Sweden)

    Mark E Sherman

    Full Text Available Breast cancer risk prediction models are used to plan clinical trials and counsel women; however, relationships of predicted risks of breast cancer incidence and prognosis after breast cancer diagnosis are unknown. Using largely pre-diagnostic information from the Breast Cancer Surveillance Consortium (BCSC) for 37,939 invasive breast cancers (1996-2007), we estimated 5-year breast cancer risk (<1%; 1-1.66%; ≥1.67%) with three models: BCSC 1-year risk model (BCSC-1; adapted to 5-year predictions); Breast Cancer Risk Assessment Tool (BCRAT); and BCSC 5-year risk model (BCSC-5). Breast cancer-specific mortality post-diagnosis (range: 1-13 years; median: 5.4-5.6 years) was related to predicted risk of developing breast cancer using unadjusted Cox proportional hazards models, and in age-stratified (35-44; 45-54; 55-69; 70-89 years) models adjusted for continuous age, BCSC registry, calendar period, income, mode of presentation, stage, and treatment. Mean age at diagnosis was 60 years. Of 6,021 deaths, 2,993 (49.7%) were ascribed to breast cancer. In unadjusted case-only analyses, predicted breast cancer risk ≥1.67% versus <1.0% was associated with lower risk of breast cancer death; BCSC-1: hazard ratio (HR) = 0.82 (95% CI = 0.75-0.90); BCRAT: HR = 0.72 (95% CI = 0.65-0.81); and BCSC-5: HR = 0.84 (95% CI = 0.75-0.94). Age-stratified, adjusted models showed similar, although mostly non-significant, HRs. Among women ages 55-69 years, HRs approximated 1.0. Generally, higher predicted risk was inversely related to percentages of cancers with unfavorable prognostic characteristics, especially among women 35-44 years. Among cases assessed with three models, higher predicted risk of developing breast cancer was not associated with greater risk of breast cancer death; thus, these models would have limited utility in planning studies to evaluate breast cancer mortality reduction strategies. Further, when offering women counseling, it may be useful to note that high

  17. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    OpenAIRE

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or t...

  18. Modeling emergency evacuation for major hazard industrial sites

    International Nuclear Information System (INIS)

    Georgiadou, Paraskevi S.; Papazoglou, Ioannis A.; Kiranoudis, Chris T.; Markatos, Nikolaos C.

    2007-01-01

    A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies

  19. Combining Spatial Models for Shallow Landslides and Debris-Flows Prediction

    Directory of Open Access Journals (Sweden)

    Eurípedes Vargas do Amaral

    2013-05-01

    Full Text Available Mass movements in Brazil are common phenomena, especially during strong rainfall events that occur frequently in the summer season. These phenomena cause losses of lives and serious damage to roads, bridges, and properties. Moreover, the illegal occupation by slums on the slopes around the cities intensifies the effect of the mass movements. This study aimed to develop a methodology that combines models of shallow landslides and debris flows in order to create a map of landslide initiation and debris-flow volume and runout distance. The study area comprised two catchments in Rio de Janeiro city, Quitite and Papagaio, which drain side by side down the west flank of the Maciço da Tijuca, with an area of 5 km2. The method included the following steps: (a) location of the areas susceptible to landslides using the SHALSTAB model; (b) determination of the rheological parameters of the debris flow using the back-analysis technique; and (c) combination of the SHALSTAB and FLO-2D models to delineate the areas most susceptible to mass movements. These scenarios were compared with the landslide and debris-flow event of February 1996. Many FLO-2D simulations were exhaustively made to estimate the rheological parameters by back-analysis. The rheological coefficients of each simulation were back-calculated by matching the simulated area and depth of the debris flow against field data. The initial material volume in the FLO-2D simulations was estimated from the SHALSTAB model. The combination of these two mathematical models, SHALSTAB and FLO-2D, was able to predict both landslide and debris-flow events. Such procedures can reduce casualties and property damage by delineating hazard areas and estimating hazard intensities for input into risk studies, providing information for public policy and planning.
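
    As a hedged sketch of the SHALSTAB-type threshold referred to above, the snippet below evaluates the commonly cited cohesionless criterion, in which the critical steady-state recharge over transmissivity is q/T = (b*sin(theta)/a) * (rho_s/rho_w) * (1 - tan(theta)/tan(phi)); the slope, friction angle, and topographic values are illustrative assumptions, not the Quitite/Papagaio inputs, and fuller variants of the model also include soil cohesion.

    import numpy as np

    theta = np.radians(35.0)      # slope angle
    phi = np.radians(40.0)        # soil friction angle
    a_over_b = 50.0               # contributing area per unit contour length [m]
    rho_ratio = 1.8               # bulk soil density / water density

    qt_crit = (np.sin(theta) / a_over_b) * rho_ratio * (1.0 - np.tan(theta) / np.tan(phi))
    print("critical q/T [1/m]:", f"{qt_crit:.4f}")
    print("log10(q/T):", round(np.log10(qt_crit), 2))   # SHALSTAB susceptibility classes use log10(q/T)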

  20. PRELIMINARY CONSIDERATIONS ON SUSHI AS POTENTIALLY HAZARDOUS FOOD

    OpenAIRE

    G. Liuzzo; P. Bonilauri; R. Leonelli; A. Serraino; S. Bentley

    2011-01-01

    The Authors studied the physicochemical properties (pH and Aw) of samples of Nigiri sushi and their ingredients along their shelf life, integrating those results with a predictive microbiological model, in order to determine or rule out the growth of Listeria monocytogenes above the thresholds set by Reg. (EU) 2073/2005. Results point towards substantial containment of the target biological hazard, even though the prevention of thermal abuse is a key point in increasing safety.

  1. Building and validation of a prognostic model for predicting extracorporeal circuit clotting in patients with continuous renal replacement therapy.

    Science.gov (United States)

    Fu, Xia; Liang, Xinling; Song, Li; Huang, Huigen; Wang, Jing; Chen, Yuanhan; Zhang, Li; Quan, Zilin; Shi, Wei

    2014-04-01

    To develop a predictive model for circuit clotting in patients undergoing continuous renal replacement therapy (CRRT). A total of 425 cases were selected: 302 cases were used to develop a predictive model of extracorporeal circuit life span during CRRT without citrate anticoagulation over 24 h, and 123 cases were used to validate the model. The prediction formula was developed using multivariate Cox proportional-hazards regression analysis, from which a risk score was assigned. The mean survival time of the circuit was 15.0 ± 1.3 h, and the rate of circuit clotting was 66.6% during 24 h of CRRT. Five significant variables were assigned a predicting score according to their regression coefficients: insufficient blood flow, no anticoagulation, hematocrit ≥0.37, lactic acid of arterial blood gas analysis ≤3 mmol/L, and APTT (R² = 0.232; P = 0.301). A risk score that includes the five above-mentioned variables can be used to predict the likelihood of extracorporeal circuit clotting in patients undergoing CRRT.

  2. Estimating drought risk across Europe from reported drought impacts, hazard indicators and vulnerability factors

    Science.gov (United States)

    Blauhut, V.; Stahl, K.; Stagge, J. H.; Tallaksen, L. M.; De Stefano, L.; Vogt, J.

    2015-12-01

    Drought is one of the most costly natural hazards in Europe. Due to its complexity, drought risk, the combination of the natural hazard and societal vulnerability, is difficult to define and challenging to detect and predict, as the impacts of drought are very diverse, covering the breadth of socioeconomic and environmental systems. Pan-European maps of drought risk could inform the elaboration of guidelines and policies to address its documented severity and impact across borders. This work (1) tests the capability of commonly applied hazard indicators and vulnerability factors to predict annual drought impact occurrence for different sectors and macro regions in Europe and (2) combines information on past drought impacts, drought hazard indicators, and vulnerability factors into estimates of drought risk at the pan-European scale. This "hybrid approach" bridges the gap between traditional vulnerability assessment and probabilistic impact forecasting in a statistical modelling framework. Multivariable logistic regression was applied to predict the likelihood of impact occurrence on an annual basis for particular impact categories and European macro regions. The results indicate sector- and macro-region-specific sensitivities of hazard indicators, with the Standardised Precipitation Evapotranspiration Index for a twelve-month aggregation period (SPEI-12) as the overall best hazard predictor. Vulnerability factors have only limited ability to predict drought impacts as single predictors, with information on land use and water resources being the best vulnerability-based predictors. (3) The application of the "hybrid approach" revealed strong regional (NUTS combo level) and sector-specific differences in drought risk across Europe. The majority of the best predictor combinations rely on a combination of SPEI for shorter and longer aggregation periods, and a combination of information on land use and water resources. The added value of integrating regional vulnerability information
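    A minimal sketch of the statistical core of the hybrid approach is given below: a multivariable logistic regression of annual impact occurrence on a drought hazard indicator and a vulnerability proxy. It assumes scikit-learn and uses synthetic data rather than the reported impact records used in the study; the predictor names and coefficients are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic annual records for one sector/macro region (illustrative, not study data):
# SPEI-12 (hazard indicator), an irrigated-area share (vulnerability proxy),
# and whether a drought impact was reported that year (1/0).
n = 200
spei12 = rng.normal(0, 1, n)
irrigated_share = rng.uniform(0, 0.6, n)
logit = -1.5 - 1.8 * spei12 - 2.0 * irrigated_share
impact = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([spei12, irrigated_share])
model = LogisticRegression().fit(X, impact)

# Likelihood of impact occurrence in a dry year (SPEI-12 = -2) with low irrigation
print(model.predict_proba([[-2.0, 0.1]])[0, 1])
```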

  3. Hazard interactions and interaction networks (cascades) within multi-hazard methodologies

    Science.gov (United States)

    Gill, Joel C.; Malamud, Bruce D.

    2016-08-01

    This paper combines research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between multi-layer single-hazard approaches and multi-hazard approaches that integrate such interactions. This synthesis suggests that ignoring interactions between important environmental and anthropogenic processes could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. In this paper we proceed to present an enhanced multi-hazard framework through the following steps: (i) description and definition of three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment, (ii) outlining of three types of interaction relationship (triggering, increased probability, and catalysis/impedance), and (iii) assessment of the importance of networks of interactions (cascades) through case study examples (based on the literature, field observations and semi-structured interviews). We further propose two visualisation frameworks to represent these networks of interactions: hazard interaction matrices and hazard/process flow diagrams. Our approach reinforces the importance of integrating interactions between different aspects of the Earth system, together with human activity, into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability

  4. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically feasible manner. Developing computational methods and tools for the analysis and translation of such genomic data into clinically relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty can be added for feature selection. A more powerful approach, however, is to incorporate the data by considering relationships among the omics datatypes. Here we developed two methods, SKI-Cox and wLASSO-Cox, to incorporate the associations among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that the SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma.

  5. Natural Hazards, Second Edition

    Science.gov (United States)

    Rouhban, Badaoui

    Natural disaster loss is on the rise, and the vulnerability of the human and physical environment to the violent forces of nature is increasing. In many parts of the world, disasters caused by natural hazards such as earthquakes, floods, landslides, drought, wildfires, intense windstorms, tsunami, and volcanic eruptions have caused the loss of human lives, injury, homelessness, and the destruction of economic and social infrastructure. Over the last few years, there has been an increase in the occurrence, severity, and intensity of disasters, culminating with the devastating tsunami of 26 December 2004 in South East Asia. Natural hazards are often unexpected or uncontrollable natural events of varying magnitude. Understanding their mechanisms and assessing their distribution in time and space are necessary for refining risk mitigation measures. This second edition of Natural Hazards (following a first edition published in 1991 by Cambridge University Press), written by Edward Bryant, associate dean of science at Wollongong University, Australia, grapples with this crucial issue, covering aspects of hazard prediction and related topics. The book presents a comprehensive analysis of different categories of hazards of climatic and geological origin.

  6. Guidance document on practices to model and implement Earthquake hazards in extended PSA (final version). Volume 1

    International Nuclear Information System (INIS)

    Decker, K.; Hirata, K.; Groudev, P.

    2016-01-01

    The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, identify methodological challenges, and to propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances of science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)

  7. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  8. Revised seismic hazard map for the Kyrgyz Republic

    Science.gov (United States)

    Fleming, Kevin; Ullah, Shahid; Parolai, Stefano; Walker, Richard; Pittore, Massimiliano; Free, Matthew; Fourniadis, Yannis; Villiani, Manuela; Sousa, Luis; Ormukov, Cholponbek; Moldobekov, Bolot; Takeuchi, Ko

    2017-04-01

    As part of a seismic risk study sponsored by the World Bank, a revised seismic hazard map for the Kyrgyz Republic has been produced using the OpenQuake-engine developed by the Global Earthquake Model Foundation (GEM). In this project, an earthquake catalogue spanning the period from 250 BCE to 2014 was compiled and processed through spatial and temporal declustering tools. The territory of the Kyrgyz Republic was divided into 31 area sources defined on the basis of local seismicity, including sources extending up to 200 km beyond the border. The results are presented in terms of Peak Ground Acceleration (PGA). In addition, macroseismic intensity estimates, making use of recent intensity prediction equations, were also provided, given that this measure is still widely used in Central Asia. In order to accommodate the associated epistemic uncertainty, three ground motion prediction equations were used in a logic tree structure. A set of representative earthquake scenarios was further identified based on historical data and the nature of the considered faults. The resulting hazard map, as expected, follows the country's seismicity, with the highest levels of hazard in the northeast, south and southwest of the country and an elevated zone around the centre. When considering PGA, the hazard is slightly greater for major urban centres than in previous works (e.g., Abdrakhmatov et al., 2003), although the macroseismic intensity estimates are lower than in previous studies, e.g., Ulomov (1999). For the scenario assessments, the examples that most affect the assessed urban centres are the Issyk Ata fault (in particular for Bishkek), the Chilik and Kemin faults (in particular Balykchy and Karakol), the Ferghana Valley fault system (in particular Osh, Jalal-Abad and Uzgen), the Oinik Djar fault (Naryn) and the central and western Talas-Ferghana fault (Talas). Finally, while site effects (in particular, those dependent on the upper-most geological structure) have an obvious effect on the
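    The sketch below illustrates, with made-up numbers, how hazard curves from three ground motion prediction equations can be weighted in a logic tree and interrogated for the PGA with a 10% chance of exceedance in 50 years. It is a conceptual analogue of what a PSHA engine such as OpenQuake does internally, not a reproduction of the Kyrgyz calculation; all curve values and weights are assumptions.

```python
import numpy as np

# Illustrative annual exceedance rates from three GMPE branches, evaluated at the
# same PGA levels (values are invented, not from the study).
pga = np.array([0.05, 0.1, 0.2, 0.4, 0.8])            # g
curve_gmpe_a = np.array([3e-2, 1e-2, 3e-3, 6e-4, 8e-5])
curve_gmpe_b = np.array([4e-2, 1.5e-2, 4e-3, 9e-4, 1e-4])
curve_gmpe_c = np.array([2e-2, 8e-3, 2e-3, 4e-4, 5e-5])

weights = np.array([0.4, 0.3, 0.3])                    # logic-tree branch weights, sum to 1
mean_curve = weights @ np.vstack([curve_gmpe_a, curve_gmpe_b, curve_gmpe_c])

# PGA with a 10% probability of exceedance in 50 years (annual rate of about 1/475)
target_rate = -np.log(1 - 0.1) / 50
log_pga_475 = np.interp(np.log(target_rate),
                        np.log(mean_curve[::-1]), np.log(pga[::-1]))
print(round(float(np.exp(log_pga_475)), 3))
```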

  9. An emerging methodology of slope hazard assessment for natural gas pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z.J.; O' Neil, G.; Rizkalla, M. [TransCanada PipeLines Ltd., Calgary, AB (Canada)

    2000-07-01

    A new slope assessment methodology has been developed by TransCanada PipeLines Ltd. in an effort to switch from a reactive to a proactive hazard management approach and to optimize maintenance expenditure. The company operates 37,000 km of natural gas gathering and transmission pipelines, portions of which traverse slopes and stream crossings. The newly developed rainfall-ground movement model provides site-specific ground movement predictions for approximately 1100 slopes and establishes a risk-ranked list of slopes upon which maintenance decisions can be based. The input to the predictive model is derived from internal and public information regarding site conditions. This information serves as input to a pipe-soil interaction model to determine the probability of pipeline failure for each slope. The ground movement considered in this model is limited to creep-type movement, which is typically less than 100 mm per year. Landslides are not addressed in this paper. A system-wide database has been constructed for slopes to prioritize the slope movement hazards. The slope information includes geotechnical data such as bedrock geology, surficial geology, slope details, precipitation and erosion potential. Information related to the pipeline includes the location, age, size, as well as design pressure and temperature. 13 refs., 2 figs.

  10. Using Multi-Scenario Tsunami Modelling Results combined with Probabilistic Analyses to provide Hazard Information for the South-WestCoast of Indonesia

    Science.gov (United States)

    Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G.

    2009-04-01

    Indonesia is located at one of the most active geological subduction zones in the world. Following the most recent seaquakes and their subsequent tsunamis in December 2004 and July 2006, it is expected that tsunamis are also likely to occur in the near future, owing to increased tectonic tension leading to abrupt vertical seafloor alterations after a century of relative tectonic quiescence. To face this devastating threat, tsunami hazard maps are very important as a basis for evacuation planning and mitigation strategies. The hazard assessment of tsunami impact is mostly based on numerical modelling, because the model results normally offer the most precise database for a hazard analysis: they include spatially distributed data and their influence on the hydraulic dynamics. Generally, a model result gives a probability for the intensity distribution of a tsunami at the coast (or run-up) and for the spatial distribution of the maximum inundation area, depending on the location and magnitude of the tsunami source used. The boundary condition of the source used for the model is mostly chosen by a worst-case approach: the location and magnitude that are likely to occur and that are assumed to generate the worst impact are used to predict the impact at a specific area. But for a tsunami hazard assessment covering a large coastal area, as required in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable, because many tsunami sources can cause an impact at the coast and must be considered. Thus, a multi-scenario tsunami modelling approach was developed to provide a reliable hazard assessment covering large areas. For the Indonesian Early Warning System, many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) for different probable tsunami sources and different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of

  11. Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California

    Science.gov (United States)

    Pike, Richard J.; Graymer, Russell W.

    2008-01-01

    With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
    *Introduction by Russell W. Graymer
    *Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson
    *Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk
    *Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
    *Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer
    *Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike
    The plates consist of:
    *Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
    *Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven

  12. Role of beach morphology in wave overtopping hazard assessment

    Science.gov (United States)

    Phillips, Benjamin; Brown, Jennifer; Bidlot, Jean-Raymond; Plater, Andrew

    2017-04-01

    Understanding the role of beach morphology in controlling wave overtopping volume will further minimise uncertainties in flood risk assessments at coastal locations defended by engineered structures worldwide. XBeach is used to model wave overtopping volume for a 1:200 yr joint probability distribution of waves and water levels with measured, pre- and post-storm beach profiles. The simulation with measured bathymetry is repeated with and without morphological evolution enabled during the modelled storm event. This research assesses the role of morphology in controlling wave overtopping volumes for hazardous events that meet the typical design level of coastal defence structures. Results show that disabling storm-driven morphology under-represents modelled wave overtopping volumes by up to 39% under high Hs conditions and has a greater impact on the wave overtopping rate than the variability applied within the boundary conditions, owing to the range of wave-water level combinations that meet the 1:200 yr joint probability criterion. Accounting for morphology in flood modelling is therefore critical for accurately predicting wave overtopping volumes and the resulting flood hazard, and for assessing economic losses.

  13. Assessment of volcanic hazards, vulnerability, risk and uncertainty (Invited)

    Science.gov (United States)

    Sparks, R. S.

    2009-12-01

    There are many sources of uncertainty in forecasting the areas that volcanic activity will affect and the severity of the effects. Uncertainties arise from: natural variability, inadequate data, biased data, incomplete data, lack of understanding of the processes, limitations to predictive models, ambiguity, and unknown unknowns. The description of volcanic hazards is thus necessarily probabilistic and requires assessment of the attendant uncertainties. Several issues arise from the probabilistic nature of volcanic hazards and the intrinsic uncertainties. Although zonation maps require well-defined boundaries for administrative pragmatism, such boundaries cannot divide areas that are completely safe from those that are unsafe. Levels of danger or safety need to be defined to decide on and justify boundaries through the concepts of vulnerability and risk. More data, better observations, and improved models may reduce uncertainties, but they can also increase uncertainties and may lead to re-appraisal of zone boundaries. Probabilities inferred by statistical techniques are hard to communicate. Expert elicitation is an emerging methodology for risk assessment and uncertainty evaluation. The method has been applied at one major volcanic crisis (Soufrière Hills Volcano, Montserrat), and is being applied in planning for volcanic crises at Vesuvius.

  14. Site characterization and modeling to estimate movement of hazardous materials in groundwater

    International Nuclear Information System (INIS)

    Ditmars, J.D.

    1988-01-01

    A quantitative approach for evaluating the effectiveness of site characterization measurement activities is developed and illustrated with an example application to hypothetical measurement schemes at a potential geologic repository site for radioactive waste. The method is a general one and could also be applied at sites for underground disposal of hazardous chemicals. The approach presumes that measurements will be undertaken to support predictions of the performance of some aspect of a constructed facility or natural system. It requires a quantitative performance objective, such as groundwater travel time or contaminant concentration, against which to compare predictions of performance. The approach recognizes that such predictions are uncertain because the measurements upon which they are based are uncertain. The effectiveness of measurement activities is quantified by a confidence index, β, that reflects the number of standard deviations separating the best estimate of performance from the predetermined performance objective. Measurements that reduce the uncertainty in predictions lead to increased values of β. The link between measurement and prediction uncertainties, required for the evaluation of β for a particular measurement scheme, identifies the measured quantities that significantly affect prediction uncertainty. The components of uncertainty in those key measurements are spatial variation, noise, estimation error, and measurement bias. 7 refs., 4 figs

  15. St. Louis area earthquake hazards mapping project; seismic and liquefaction hazard maps

    Science.gov (United States)

    Cramer, Chris H.; Bauer, Robert A.; Chung, Jae-won; Rogers, David; Pierce, Larry; Voigt, Vicki; Mitchell, Brad; Gaunt, David; Williams, Robert; Hoffman, David; Hempen, Gregory L.; Steckel, Phyllis; Boyd, Oliver; Watkins, Connor M.; Tucker, Kathleen; McCallister, Natasha

    2016-01-01

    We present probabilistic and deterministic seismic and liquefaction hazard maps for the densely populated St. Louis metropolitan area that account for the expected effects of surficial geology on earthquake ground shaking. Hazard calculations were based on a map grid of 0.005°, or about every 500 m, and are thus higher in resolution than in any earlier studies. To estimate ground motions at the surface of the model (e.g., site amplification), we used a new detailed near-surface shear-wave velocity model in a 1D equivalent-linear response analysis. When compared with the 2014 U.S. Geological Survey (USGS) National Seismic Hazard Model, which uses a uniform firm-rock-site condition, the new probabilistic seismic-hazard estimates document much more variability. Hazard levels for upland sites (consisting of bedrock and weathered bedrock overlain by loess-covered till and drift deposits) show up to twice the ground-motion values for peak ground acceleration (PGA), and similar ground-motion values for 1.0 s spectral acceleration (SA). Probabilistic ground-motion levels for lowland alluvial floodplain sites (generally the 20-40-m-thick modern Mississippi and Missouri River floodplain deposits overlying bedrock) exhibit up to twice the ground-motion levels for PGA, and up to three times the ground-motion levels for 1.0 s SA. Liquefaction probability curves were developed from available standard penetration test data assuming typical lowland and upland water table levels. A simplified liquefaction hazard map was created from the 5%-in-50-year probabilistic ground-shaking model. The liquefaction hazard ranges from low in the uplands to high (more than 60% of the area expected to liquefy) in the lowlands. Because many transportation routes, power and gas transmission lines, and population centers exist in or on the highly susceptible lowland alluvium, these areas in the St. Louis region are at significant potential risk from seismically induced liquefaction and associated

  16. The Alcohol Use Disorders Identification Test for Consumption (AUDIT-C is more useful than pre-existing laboratory tests for predicting hazardous drinking: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Hideki Fujii

    2016-05-01

    Background: It is important to screen for alcohol consumption and drinking customs in a standardized manner. The aims of this study were (1) to investigate whether the AUDIT score is useful for predicting hazardous drinking using optimal cutoff scores and (2) to use multivariate analysis to evaluate whether the AUDIT score is more useful than pre-existing laboratory tests for predicting hazardous drinking. Methods: A cross-sectional study using the Alcohol Use Disorders Identification Test (AUDIT) was conducted in 334 outpatients who consulted our internal medicine department. The patients completed self-reported questionnaires and underwent a diagnostic interview, physical examination, and laboratory testing. Results: Forty (23%) male patients reported daily alcohol consumption ≥ 40 g, and 16 (10%) female patients reported consumption ≥ 20 g. The optimal cutoff values for hazardous drinking were calculated using 10-fold cross validation, resulting in an optimal AUDIT score cutoff of 8.2, with a sensitivity of 95.5%, specificity of 87.0%, false positive rate of 13.0%, false negative rate of 4.5%, and area under the receiver operating characteristic curve of 0.97. Multivariate analysis revealed that the most popular short version of the AUDIT, consisting solely of its three consumption items (AUDIT-C), and patient sex were significantly associated with hazardous drinking. The aspartate transaminase/alanine transaminase (AST/ALT) ratio and mean corpuscular volume (MCV) were weakly significant. Conclusions: This study showed that the AUDIT score and particularly the AUDIT-C score were more useful than the AST/ALT ratio and MCV for predicting hazardous drinking.
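    As a sketch of the cutoff-selection step, the code below finds a threshold on synthetic AUDIT-like scores by maximising the Youden index on a ROC curve (assuming scikit-learn). The simulated score distributions and the resulting numbers are illustrative and unrelated to the study's patients.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic AUDIT-like scores: hazardous drinkers tend to score higher
n_safe, n_haz = 280, 56
scores = np.concatenate([rng.normal(4, 2.5, n_safe), rng.normal(14, 4, n_haz)])
hazardous = np.concatenate([np.zeros(n_safe), np.ones(n_haz)])

fpr, tpr, thresholds = roc_curve(hazardous, scores)
youden = tpr - fpr                      # sensitivity + specificity - 1
best = np.argmax(youden)

print("AUC:", round(roc_auc_score(hazardous, scores), 3))
print("optimal cutoff:", round(float(thresholds[best]), 1),
      "sensitivity:", round(float(tpr[best]), 3),
      "specificity:", round(float(1 - fpr[best]), 3))
```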

  17. GIS and RS-based modelling of potential natural hazard areas in Pehchevo municipality, Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Milevski Ivica

    2013-01-01

    In this paper, an approach to Geographic Information System (GIS) and Remote Sensing (RS) assessment of potential natural hazard areas (excess erosion, landslides, flash floods and fires) is presented. For that purpose, Pehchevo Municipality in the easternmost part of the Republic of Macedonia is selected as a case study area because of the high local impact of natural hazards on the environment, the socio-demographic situation and the local economy. First, the most relevant static factors for each type of natural hazard are selected (topography, land cover, anthropogenic objects and infrastructure). With GIS and satellite imagery, multi-layer calculation is performed based on available traditional equations, clustering or discretization procedures. In this way, suitable relatively "static" natural hazard maps (models) are produced. Then, dynamic (mostly climate-related) factors are included in the previous models, resulting in appropriate scenarios correlated with different amounts of precipitation, temperature, wind direction etc. Finally, the GIS-based scenarios are evaluated and tested with field checks or very fine resolution Google Earth imagery, showing good accuracy. Further development of such GIS models in connection with automatic remote meteorological stations and dynamic satellite imagery (like MODIS) will provide on-time warning of coming natural hazards, avoiding potential damage or even casualties.

  18. Probabilistic earthquake hazard analysis for Cairo, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2016-04-01

    Cairo is the capital of Egypt and the largest city in the Arab world and Africa, and the sixteenth largest metropolitan area in the world. It was founded in the tenth century (969 AD) and is 1046 years old. It has long been a center of the region's political and cultural life. Therefore, earthquake risk assessment for Cairo is of great importance. The present work aims to analyse the earthquake hazard of Cairo as a key input element for the risk assessment. The regional seismotectonic setting shows that Cairo could be affected by both far- and near-field seismic sources. The seismic hazard of Cairo has been estimated using the probabilistic seismic hazard approach. A logic tree framework was used during the calculations. Epistemic uncertainties were taken into account by using alternative seismotectonic models and alternative ground motion prediction equations. Seismic hazard values have been estimated within a grid of 0.1° × 0.1° spacing for all of Cairo's districts at different spectral periods and four return periods (224, 615, 1230, and 4745 years). Moreover, the uniform hazard spectra have been calculated at the same return periods. The pattern of the contour maps shows that the highest values of peak ground acceleration are concentrated in the districts of the eastern zone (e.g., El Nozha) and the lowest values in the districts of the northern and western zones (e.g., El Sharabiya and El Khalifa).

  19. Up-to-date Probabilistic Earthquake Hazard Maps for Egypt

    Science.gov (United States)

    Gaber, Hanan; El-Hadidy, Mahmoud; Badawy, Ahmed

    2018-04-01

    An up-to-date earthquake hazard analysis has been performed in Egypt using a probabilistic seismic hazard approach. Through the current study, we use a complete and homogeneous earthquake catalog covering the time period between 2200 BC and 2015 AD. Three seismotectonic models representing the seismic activity in and around Egypt are used. A logic-tree framework is applied to allow for the epistemic uncertainty in the declustering parameters, minimum magnitude, seismotectonic setting and ground-motion prediction equations. The hazard analysis is performed for a grid of 0.5° × 0.5° for rock-site conditions, in terms of the peak ground acceleration (PGA) and spectral acceleration at 0.2-, 0.5-, 1.0- and 2.0-s periods. The hazard is estimated for three return periods (72, 475 and 2475 years) corresponding to 50, 10 and 2% probability of exceedance in 50 years. The uniform hazard spectra for the cities of Cairo, Alexandria, Aswan and Nuwbia are constructed. The hazard maps show that the highest ground acceleration values are expected in the northeastern part of Egypt around the Gulf of Aqaba (PGA up to 0.4 g for return period 475 years) and in south Egypt around the city of Aswan (PGA up to 0.2 g for return period 475 years). The Western Desert of Egypt is characterized by the lowest level of hazard (PGA lower than 0.1 g for return period 475 years).
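    The quoted return periods follow directly from the stated exceedance probabilities under a Poisson occurrence assumption; the short snippet below reproduces that conversion.

```python
import math

def return_period(p_exceed, years):
    """Return period (yr) for a probability p_exceed of at least one exceedance
    in `years` years, assuming Poisson occurrence."""
    return -years / math.log(1.0 - p_exceed)

# The three hazard levels quoted in the record (50%, 10% and 2% in 50 years)
for p in (0.50, 0.10, 0.02):
    print(f"{p:.0%} in 50 yr  ->  {return_period(p, 50):.0f}-yr return period")
```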

  20. Uncertainty on shallow landslide hazard assessment: from field data to hazard mapping

    Science.gov (United States)

    Trefolini, Emanuele; Tolo, Silvia; Patelli, Eduardo; Broggi, Matteo; Disperati, Leonardo; Le Tuan, Hai

    2015-04-01

    Shallow landsliding that involves Hillslope Deposits (HD), the surficial soil that covers the bedrock, is an important process of erosion, transport and deposition of sediment along hillslopes. Although shallow landslides generally mobilize relatively small volumes of material, they represent the most hazardous factor in mountain regions due to their high velocity and the common absence of warning signs. Moreover, increasing urbanization and likely climate change make shallow landslides a source of widespread risk; therefore, the interest of the scientific community in this process has grown over the last three decades. One of the main aims of research projects on this topic is to perform robust shallow landslide hazard assessments for wide areas (regional assessment), in order to support sustainable spatial planning. Currently, three main methodologies may be implemented to assess regional shallow landslide hazard: expert evaluation, probabilistic (or data mining) methods, and methods based on physical models. The aim of this work is to evaluate the uncertainty of shallow landslide hazard assessment based on physical models, taking into account spatial variables such as geotechnical and hydrogeological parameters as well as hillslope morphometry. To achieve this goal, a wide dataset of geotechnical properties (shear strength, permeability, depth and unit weight) of HD was gathered by integrating field survey, in situ and laboratory tests. This spatial database was collected from a study area of about 350 km2 including different bedrock lithotypes and geomorphological features. The uncertainty associated with each step of the hazard assessment process (e.g. field data collection, regionalization of site-specific information and numerical modelling of hillslope stability) was carefully characterized. The most appropriate probability density function (PDF) was chosen for each numerical variable and we assessed the uncertainty propagation on HD strength parameters obtained by

  1. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    Science.gov (United States)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular approach for analyzing the effects of several covariates on survival time. However, the assumption of proportional hazards (a constant hazard ratio over time) is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not rely on the proportional hazards assumption and can be used as an alternative to the PH model when that assumption is violated. The objective of this research was to compare the performance of the PH model and AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate model among those considered. Results of the best fitted model (log-normal AFT model) showed that covariates such as women's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
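    A minimal sketch of this kind of model comparison, assuming the lifelines package and its bundled Rossi recidivism data in place of the FBI survey data, fits Weibull and log-normal AFT models and ranks them by AIC; the data set and the resulting ranking are purely illustrative.

```python
from lifelines import WeibullAFTFitter, LogNormalAFTFitter
from lifelines.datasets import load_rossi

df = load_rossi()            # duration column 'week', event column 'arrest'

aics = {}
for name, Fitter in [("Weibull AFT", WeibullAFTFitter),
                     ("Log-normal AFT", LogNormalAFTFitter)]:
    model = Fitter().fit(df, duration_col="week", event_col="arrest")
    aics[name] = model.AIC_

# The model with the smallest AIC is preferred
for name, aic in sorted(aics.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {aic:.1f}")
```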

  2. A decision model for the risk management of hazardous processes

    International Nuclear Information System (INIS)

    Holmberg, J.E.

    1997-03-01

    A decision model for risk management of hazardous processes as an optimisation problem of a point process is formulated in the study. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design and, (3) operational decisions. These three controlling methods play quite different roles in the practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximizing the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of the operational strategies given a certain design and licensing time. This is done by a dynamic risk model (marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long term control variable guiding the selection of operational alternatives in short term problems is studied. The optimisation problem is solved by the stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)

  3. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and safety factor of landslide hazard in study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...... reduction, and assist in the development of guidelines for sustainable land use planning. The analysis is used to identify the factors that are related to landslides and to predict the landslide hazard in the future based on such a relationship....

  4. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows the topography below current glaciers to be predicted by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on the geomorphological controls on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
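    A crude sketch of the bed-topography step is given below: the modelled bed is obtained by subtracting ice thickness from the surface DEM, and overdeepenings are flagged with a morphological closing used as a stand-in for a proper sink-filling routine. The grid data, window size and depth threshold are illustrative assumptions, not the workflow of the study.

```python
import numpy as np
from scipy import ndimage

def potential_lakes(surface_dem, ice_thickness, min_depth=2.0, window=15):
    """Flag overdeepenings in the modelled glacier bed (surface minus ice thickness).

    A grey-scale morphological closing is used as a crude stand-in for a proper
    priority-flood sink-filling routine; cells where the 'filled' bed sits more
    than `min_depth` metres above the raw bed are treated as potential lake sites.
    """
    bed = surface_dem - ice_thickness
    filled = ndimage.grey_closing(bed, size=(window, window))
    depth = filled - bed
    mask = (depth > min_depth) & (ice_thickness > 0)   # only below present-day ice
    labels, n_lakes = ndimage.label(mask)
    return labels, n_lakes, depth

# Toy example: a valley glacier whose bed has a rim-bounded hollow near the centre
x = np.linspace(-1, 1, 200)
surface = np.tile(3000 - 400 * x**2, (50, 1))
thickness = np.tile(np.clip(150 - 300 * np.abs(x), 0, None), (50, 1))
labels, n_lakes, depth = potential_lakes(surface, thickness, window=121)
print(n_lakes, round(float(depth.max()), 1))
```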

  5. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend towards new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To achieve this, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.

  6. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    Science.gov (United States)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data have transformed the field of large-scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental-scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national-scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge-to-gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
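    The snippet below gives a toy version of the footprint idea for three hypothetical gauges, using a Gaussian copula with Gumbel margins as a simpler stand-in for the conditional multivariate extremes model described in the abstract. All correlation and distribution parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Gauge-to-gauge correlation of annual maxima and per-gauge Gumbel parameters
# (three hypothetical gauges; all values are illustrative)
corr = np.array([[1.0, 0.7, 0.4],
                 [0.7, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])
gumbel_loc = np.array([120.0, 80.0, 300.0])     # m^3/s
gumbel_scale = np.array([30.0, 20.0, 90.0])

def simulate_footprints(n_events):
    z = rng.multivariate_normal(np.zeros(3), corr, size=n_events)
    u = stats.norm.cdf(z)                                   # correlated uniforms
    flows = stats.gumbel_r.ppf(u, loc=gumbel_loc, scale=gumbel_scale)
    return_periods = 1.0 / (1.0 - u)                        # per-gauge return period
    return flows, return_periods

flows, rp = simulate_footprints(5)
print(np.round(rp, 1))   # the return period varies from gauge to gauge within one event
```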

  7. Dynamic modelling of a forward osmosis-nanofiltration integrated process for treating hazardous wastewater.

    Science.gov (United States)

    Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik

    2016-11-01

    Dynamic modelling and simulation of an integrated nanofiltration-forward osmosis system was carried out, along with an economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by making wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is well reflected in the high overall correlation coefficient (R2 > 0.98) and low relative error (osmosis loop at a reasonably high flux of 56-58 l per square meter per hour.

  8. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    (Tybjaerg-Hansen, A.) The Fibrinogen Studies Collaboration.The Copenhagen City Heart Study; Tybjærg-Hansen, Anne

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...

  9. New Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, N.

    2012-12-01

    The Caucasus is a region of numerous natural hazards and ensuing disasters. Analysis of the losses due to past disasters indicates that those most catastrophic in the region have historically been due to strong earthquakes. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameter in attenuation relations is peak ground acceleration, because this parameter gives useful information for seismic hazard assessment. For this reason, many peak ground acceleration attenuation relations have been developed by different authors. In addition, a few attenuation relations have been developed for the Caucasus region: Ambraseys et al. (1996, 2005), which were based on the entire European region and were not focused locally on the Caucasus, and Smit et al. (2000), which was based on a small amount of acceleration data that is really not enough. Since 2003, construction of the Georgian Digital Seismic Network has proceeded with the help of a number of international organizations, projects and private companies. The work conducted involved scientific as well as organizational activities, including resolving technical problems concerning communication and data transmission. Thus, today it is possible to obtain real-time data and conduct research based on digital seismic data. Generally, ground motion and damage are influenced by the magnitude of the earthquake, the distance from the seismic source to the site, the local ground conditions and the characteristics of buildings. This is the reason why this topic is emphasized in this study. In this study, new GMPE models are obtained based on new data from the Georgian seismic network and also from neighbouring countries. The model coefficients are estimated in the classical statistical way, by regression analysis. Site ground conditions are also considered, because the same earthquake recorded at the same distance may cause different damage
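    As an illustration of the regression step, the sketch below fits a basic attenuation form, ln(PGA) = c0 + c1*M + c2*ln(R + 10), to synthetic records by least squares. The functional form, units and coefficient values are illustrative assumptions and are not the new Caucasus GMPE.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic PGA records (illustrative only, arbitrary units)
n = 300
M = rng.uniform(3.5, 6.5, n)                           # magnitude
R = rng.uniform(5, 250, n)                             # hypocentral distance, km
ln_pga = 0.9 * M - 1.6 * np.log(R + 10.0) - 1.2 + rng.normal(0, 0.5, n)

# Least-squares fit of the three coefficients and the residual standard deviation
X = np.column_stack([np.ones(n), M, np.log(R + 10.0)])
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
sigma = np.std(ln_pga - X @ coef)
print("c0, c1, c2 =", np.round(coef, 3), " sigma =", round(float(sigma), 3))

# Median PGA predicted for an M 5.5 event at 30 km (same arbitrary units)
print(float(np.exp(coef @ [1.0, 5.5, np.log(40.0)])))
```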

  10. Conceptual geoinformation model of natural hazards risk assessment

    Science.gov (United States)

    Kulygin, Valerii

    2016-04-01

    Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geoinformation model of risk assessment is presented to highlight the differentiation by type of economic activity in extreme event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of origin of natural hazards, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. A scenario approach is utilized to account for the spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.

  11. Ground Motion Prediction Equations for Western Saudi Arabia from a Reference Model

    Science.gov (United States)

    Kiuchi, R.; Mooney, W. D.; Mori, J. J.; Zahran, H. M.; Al-Raddadi, W.; Youssef, S.

    2017-12-01

    Western Saudi Arabia is surrounded by several active seismic zones such as the Red Sea and the Gulf of Aqaba, where a destructive magnitude 7.3 event occurred in 1995. Over the last decade, the Saudi Geological Survey (SGS) has deployed a dense seismic network that has made it possible to monitor seismic activity more accurately. For example, the network has detected multiple seismic swarms beneath the volcanic fields in western Saudi Arabia. The most recent damaging event was a M5.7 earthquake that occurred in 2009 at Harrat Lunayyir. In terms of seismic hazard assessment, Zahran et al. (2015, 2016) presented a Probabilistic Seismic Hazard Assessment (PSHA) for western Saudi Arabia that was developed using published Ground Motion Prediction Equations (GMPEs) from areas outside of Saudi Arabia. In this study, we consider 41 earthquakes of M 3.0 - 5.4, recorded on 124 stations of the SGS network, to create a set of 442 peak ground acceleration (PGA) and peak ground velocity (PGV) records with a range of epicentral distances from 3 km to 400 km. We use the GMPE model BSSA14 (Boore et al., 2014) as a reference model and estimate our own best-fitting coefficients from a regression analysis of the events that occurred in western Saudi Arabia. For epicentral distances less than 100 km, our best-fitting model has different source scaling in comparison with the GMPE of BSSA14 adjusted for the California region. In addition, our model indicates that peak amplitudes attenuate less in western Saudi Arabia than in California.

  12. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish a quantitative structure-toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented the DTB and DTF models constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was paid to the prediction ability and robustness of the models, investigated both by external validation and in 10-fold cross-validation. In the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category, and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% in the external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 in the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the
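    Since DTB and DTF are boosted and bagged decision-tree ensembles, scikit-learn's gradient boosting and random forest classifiers can serve as rough analogues. The sketch below runs them with 10-fold cross-validation on synthetic descriptors, purely to show the workflow; the data, settings and accuracies are illustrative and are not a reproduction of the reported models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the eight molecular descriptors and a two-category toxicity label
X, y = make_classification(n_samples=1450, n_features=8, n_informative=6, random_state=0)

for name, clf in [("boosted trees (DTB-like)", GradientBoostingClassifier(random_state=0)),
                  ("tree forest (DTF-like)", RandomForestClassifier(n_estimators=300, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()     # 10-fold cross-validation
    clf.fit(X, y)
    top = np.argsort(clf.feature_importances_)[::-1][:3]
    print(f"{name}: CV accuracy = {acc:.3f}, most important descriptors = {top}")
```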

  13. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    International Nuclear Information System (INIS)

    Singh, Kunwar P.; Gupta, Shikha

    2014-01-01

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish a quantitative structure-toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented the DTB and DTF models constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was paid to the prediction ability and robustness of the models, investigated both by external validation and in 10-fold cross-validation. In the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category, and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% in the external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 in the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting the toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  14. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained in this way therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
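    The sketch below shows how a PIT histogram is built from an ensemble and a series of observations, and how an under-dispersive ensemble piles counts into the outer bins. The numbers are synthetic, and the randomised tie-breaking within rank intervals is one common convention rather than necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def pit_values(ensemble, observations):
    """Probability integral transform: rank of each observation within its ensemble.

    ensemble: array (n_forecasts, n_members); observations: array (n_forecasts,).
    A flat histogram of the returned values indicates a reliable ensemble;
    a U-shape indicates under-dispersion.
    """
    n_members = ensemble.shape[1]
    rank = (ensemble < observations[:, None]).sum(axis=1)
    # randomise the position within the rank interval to break ties
    return (rank + rng.uniform(0, 1, len(observations))) / (n_members + 1)

# Illustrative under-dispersive ensemble: members drawn with too small a spread
obs = rng.normal(100, 20, size=500)                        # "observed" volumes
members = obs[:, None] + rng.normal(0, 8, size=(500, 25))  # spread too narrow
pit = pit_values(members, obs)
hist, _ = np.histogram(pit, bins=10, range=(0, 1))
print(hist)    # counts pile up in the outer bins, signalling under-dispersion
```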

  15. A model integrating longshore and cross-shore processes for predicting long-term shoreline response to climate change

    Science.gov (United States)

    Vitousek, Sean; Barnard, Patrick; Limber, Patrick W.; Erikson, Li; Cole, Blake

    2017-01-01

    We present a shoreline change model for coastal hazard assessment and management planning. The model, CoSMoS-COAST (Coastal One-line Assimilated Simulation Tool), is a transect-based, one-line model that predicts short-term and long-term shoreline response to climate change in the 21st century. The proposed model represents a novel, modular synthesis of process-based models of coastline evolution due to longshore and cross-shore transport by waves and sea-level rise. Additionally, the model uses an extended Kalman filter for data assimilation of historical shoreline positions to improve estimates of model parameters and thereby improve confidence in long-term predictions. We apply CoSMoS-COAST to simulate sandy shoreline evolution along 500 km of coastline in Southern California, which hosts complex mixtures of beach settings variably backed by dunes, bluffs, cliffs, estuaries, river mouths, and urban infrastructure, providing applicability of the model to virtually any coastal setting. Aided by data assimilation, the model is able to reproduce the observed signal of seasonal shoreline change for the hindcast period of 1995-2010, showing excellent agreement between modeled and observed beach states. The skill of the model during the hindcast period improves confidence in the model's predictive capability when applied to the forecast period (2010-2100) driven by GCM-projected wave and sea-level conditions. Predictions of shoreline change with limited human intervention indicate that 31% to 67% of Southern California beaches may become completely eroded by 2100 under sea-level rise scenarios of 0.93 to 2.0 m.
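
    The data-assimilation idea can be illustrated with a much-reduced scalar Kalman update in which surveyed shoreline positions repeatedly correct an imperfect one-line drift model; this is a toy sketch with made-up numbers, not the extended Kalman filter actually implemented in CoSMoS-COAST.

```python
import numpy as np

# Toy assimilation: the model predicts shoreline position y (m) with an assumed drift
# rate; noisy survey observations pull the estimate back toward reality each year.
rng = np.random.default_rng(2)
true_rate = -0.8                       # true shoreline retreat (m/yr)
years = np.arange(2000, 2011)
truth = 100.0 + true_rate * (years - years[0])
obs = truth + rng.normal(scale=2.0, size=years.size)   # survey noise

y_est, P = 100.0, 25.0                 # state estimate and its variance
model_rate, Q, R = -0.3, 1.0, 4.0      # imperfect model rate, process and observation variance
for i, (t, z) in enumerate(zip(years, obs)):
    # Forecast step: propagate the one-line model one year ahead.
    y_est += model_rate
    P += Q
    # Update step: the Kalman gain blends the model forecast and the observation.
    K = P / (P + R)
    y_est += K * (z - y_est)
    P *= (1.0 - K)
    print(f"{t}: assimilated = {y_est:6.2f} m, truth = {truth[i]:6.2f} m")
```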

  16. Flood Hazard Mapping using Hydraulic Model and GIS: A Case Study in Mandalay City, Myanmar

    Directory of Open Access Journals (Sweden)

    Kyu Kyu Sein

    2016-01-01

    Full Text Available This paper presents the use of flood frequency analysis integrated with a 1D hydraulic model (HEC-RAS) and a Geographic Information System (GIS) to prepare flood hazard maps for different return periods along the Ayeyarwady River at Mandalay City in Myanmar. Gumbel’s distribution was used to calculate the flood peaks of different return periods, namely 10, 20, 50, and 100 years. The flood peaks from the frequency analysis were input into the HEC-RAS model to find the corresponding flood levels and extents in the study area. The model results were integrated with ArcGIS to generate floodplain maps, through which flood depths and extents were identified. Analysis of the 100-year return period floodplain map indicated that 157.88 km² (17.54%) is likely to be inundated. The predicted flood depths range from just above 0 to 24 m in the floodplains and on the river. Depths of 3 to 5 m were identified in the urban areas of Chanayetharzan, Patheingyi, and Amarapura Townships. The largest inundated area, 85 km², was in Amarapura Township.
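
    Gumbel-based flood frequency analysis of this kind can be sketched in a few lines; the annual-maximum series below is synthetic, and the fitted quantiles merely stand in for the design peaks that would be passed to the hydraulic model.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum discharge series (m^3/s) at the gauge of interest.
annual_max = stats.gumbel_r.rvs(loc=15000, scale=3000, size=40, random_state=42)

loc, scale = stats.gumbel_r.fit(annual_max)          # fit Gumbel (EV1) distribution

for T in (10, 20, 50, 100):                          # return periods in years
    q_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-yr flood peak ~ {q_T:8.0f} m^3/s")  # input to the hydraulic model
```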

  17. High-Dimensional Additive Hazards Regression for Oral Squamous Cell Carcinoma Using Microarray Data: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Omid Hamidi

    2014-01-01

    Full Text Available Microarray technology results in high-dimensional, low-sample-size data sets. Fitting sparse models is therefore essential, because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on Cox proportional hazards models where censoring is present. The present study applied three sparse variable selection techniques (the Lasso, the smoothly clipped absolute deviation, and the smooth integration of counting and absolute deviation) to gene expression survival time data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performance of the techniques was evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P<0.001). The Lasso showed the maximum median area under the ROC curve over time (0.95), and the smoothly clipped absolute deviation showed the lowest prediction error (0.105). The genes selected by all methods improved the prediction of a purely clinical model, indicating the valuable information contained in the microarray features. It was concluded that the applied approaches can satisfactorily predict survival based on selected gene expression measurements.

  18. Use of raster-based data layers to model spatial variation of seismotectonic data in probabilistic seismic hazard assessment

    Science.gov (United States)

    Zolfaghari, Mohammad R.

    2009-07-01

    Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.

  19. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    and experts' understanding of benefits and risks associated with three Novel foods (a potato, rice and functional food ingredients) using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allow...... researchers to pit a normative analysis (expert mental models) against a descriptive analysis (consumer mental models). Expert models were elicited by means of a three-wave Delphi procedure from altogether 24 international experts, and consumer models from in-depth interviews with Danish consumers. The results...... revealed that consumers' and experts' mental models differed in scope. Experts focused on the types of hazards for which risk assessments can be conducted under current legal frameworks, whereas consumers were concerned about issues that lay outside the scope of current legislation. Experts...

  20. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Vol. 3, 12 December (2016), Article No. 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  1. Sensitivity of seismic hazard to various parameters and correlations for peak ground acceleration

    International Nuclear Information System (INIS)

    Ghosh, A.K.; Kushwaha, H.S.

    1998-10-01

    The design basis ground motion of a site is generally specified in terms of the peak ground acceleration (PGA), the ground motion response spectrum and the time history. The probability of exceedance of the design-level PGA is one of the factors used to quantify the seismic risk at a given site. The present paper extends the work of Cornell to consider an areal source model and a more general form of the correlation for PGA to evaluate the seismic hazard. It is further recognised that the predicted seismic hazard can vary with the various parameters involved. Numerical results have been presented to show this variability. These results will help to determine the seismic hazard at a given site and the associated uncertainties. (author)
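
    A Cornell-type exceedance calculation of this kind integrates the probability of exceeding a PGA level, given magnitude and distance, over the magnitude and distance distributions of an areal source and scales by the activity rate. The sketch below is a generic numerical illustration with placeholder attenuation coefficients and source parameters, not the specific correlations examined in the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Toy Cornell-type hazard integral for a circular areal source centred on the site,
# with a doubly truncated Gutenberg-Richter magnitude model and a generic lognormal
# attenuation relation; all coefficients are illustrative placeholders.
nu = 0.2                              # annual rate of events with M >= m_min in the source
m_min, m_max, beta = 4.0, 7.0, 2.0    # Gutenberg-Richter truncation limits and slope (base e)
r_max = 100.0                         # source radius (km)
sigma_ln = 0.6                        # aleatory variability of ln(PGA)

def ln_pga_median(m, r):
    # ln(PGA in g) = c1 + c2*M - c3*ln(r + 10); placeholder coefficients only.
    return -4.0 + 0.9 * m - 1.0 * np.log(r + 10.0)

m = np.linspace(m_min, m_max, 200)
r = np.linspace(1.0, r_max, 200)
M, R = np.meshgrid(m, r)

f_m = beta * np.exp(-beta * (M - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
f_r = 2.0 * R / r_max**2              # epicentres uniform over the circular area

for a in (0.05, 0.1, 0.2, 0.4):       # PGA levels in g
    p_exceed = stats.norm.sf((np.log(a) - ln_pga_median(M, R)) / sigma_ln)
    lam = nu * trapezoid(trapezoid(p_exceed * f_m * f_r, m, axis=1), r)
    print(f"PGA > {a:4.2f} g: annual rate of exceedance ~ {lam:.2e}")
```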

  2. Establishment and validation of a predictive nomogram model for non-small cell lung cancer patients with chronic hepatitis B viral infection.

    Science.gov (United States)

    Chen, Shulin; Lai, Yanzhen; He, Zhengqiang; Li, Jianpei; He, Xia; Shen, Rui; Ding, Qiuying; Chen, Hao; Peng, Songguo; Liu, Wanli

    2018-05-04

    This study aimed to establish an effective predictive nomogram for non-small cell lung cancer (NSCLC) patients with chronic hepatitis B viral (HBV) infection. The nomogram was based on a retrospective study of 230 NSCLC patients with chronic HBV infection. The predictive accuracy and discriminative ability of the nomogram were determined by a concordance index (C-index), calibration plot and decision curve analysis and were compared with the current tumor, node, and metastasis (TNM) staging system. Independent factors derived from Kaplan-Meier analysis of the primary cohort to predict overall survival (OS) were all assembled into a Cox proportional hazards regression model to build the nomogram model. The final model included age, tumor size, TNM stage, treatment, apolipoprotein A-I, apolipoprotein B, glutamyl transpeptidase and lactate dehydrogenase. The calibration curve for the probability of OS showed that the nomogram-based predictions were in good agreement with the actual observations. The C-index of the model for predicting OS had a superior discrimination power compared with the TNM staging system [0.780 (95% CI 0.733-0.827) vs. 0.693 (95% CI 0.640-0.746), P  20.0). The proposed nomogram model resulted in more accurate prognostic prediction for NSCLC patients with chronic HBV infection.

  3. A prediction model for treatment decisions in high-grade extremity soft-tissue sarcomas: Personalised sarcoma care (PERSARC).

    Science.gov (United States)

    van Praag, Veroniek M; Rueten-Budde, Anja J; Jeys, Lee M; Laitinen, Minna K; Pollock, Rob; Aston, Will; van der Hage, Jos A; Dijkstra, P D Sander; Ferguson, Peter C; Griffin, Anthony M; Willeumier, Julie J; Wunder, Jay S; van de Sande, Michiel A J; Fiocco, Marta

    2017-09-01

    To support shared decision-making, we developed the first prediction model for patients with primary soft-tissue sarcomas of the extremities (ESTS) which takes into account treatment modalities, including applied radiotherapy (RT) and achieved surgical margins. The PERsonalised SARcoma Care (PERSARC) model predicts overall survival (OS) and the probability of local recurrence (LR) at 3, 5 and 10 years. The study comprises the development and internal validation of the PERSARC prediction model. The cohort used to develop the model consists of 766 ESTS patients who underwent surgery, between 2000 and 2014, at five specialised international sarcoma centres. To assess the effect of prognostic factors on OS and on the cumulative incidence of LR (CILR), a multivariate Cox proportional hazards regression and the Fine and Gray model were estimated. Predictive performance was investigated using internal cross-validation (CV) and calibration. The discriminative ability of the model was determined with the C-index. Multivariate Cox regression revealed that age and tumour size had a significant effect on OS. More importantly, patients who received RT showed better outcomes, in terms of OS and CILR, than those treated with surgery alone. Internal validation of the model showed good calibration and discrimination, with a C-index of 0.677 and 0.696 for OS and CILR, respectively. The PERSARC model is the first to incorporate known clinical risk factors with the use of different treatments and surgical outcome measures. The developed model is internally validated to provide a reliable prediction of post-operative OS and CILR for patients with primary high-grade ESTS. LEVEL OF SIGNIFICANCE: level III. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Functional form diagnostics for Cox's proportional hazards model.

    Science.gov (United States)

    León, Larry F; Tsai, Chih-Ling

    2004-03-01

    We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.
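
    In exploratory work, functional-form problems of this kind are often screened by examining martingale-type residuals against a covariate; the sketch below simulates data with a quadratic effect and, assuming lifelines' CoxPHFitter.compute_residuals supports kind="martingale", checks the residuals for structure. It is only loosely analogous to the censoring-consistent residuals proposed in the paper.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated data in which the true log-hazard is quadratic in x, so a Cox model
# that enters x linearly has the wrong functional form.
rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
event_time = rng.exponential(1.0 / np.exp(0.8 * x**2))
censor_time = rng.exponential(2.0, size=n)
df = pd.DataFrame({"x": x,
                   "T": np.minimum(event_time, censor_time),
                   "E": (event_time <= censor_time).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

# Martingale-type residuals examined against x; a systematic trend suggests the
# covariate needs a transformed functional form (here, x squared).
resid = cph.compute_residuals(df, kind="martingale")
resid = resid.loc[df.index, "martingale"]          # realign with the original row order
print("correlation of residuals with x^2:", np.corrcoef(df["x"]**2, resid)[0, 1])
```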

  5. Multi Hazard Assessment: The Azores Archipelagos (PT) case

    Science.gov (United States)

    Aifantopoulou, Dorothea; Boni, Giorgio; Cenci, Luca; Kaskara, Maria; Kontoes, Haris; Papoutsis, Ioannis; Paralikidis, Sideris; Psichogyiou, Christina; Solomos, Stavros; Squicciarino, Giuseppe; Tsouni, Alexia; Xerekakis, Themos

    2016-04-01

    ) and earthquake (475 years return period) was used. Topography, lithology, soil moisture and LU/LC were also accounted for. Soil erosion risk was assessed through the empirical model RUSLE (Renard et al. 1991b). Rainfall erosivity, topography and vegetation cover are the main parameters which were used for predicting the proneness to soil loss. Expected maximum tsunami wave heights were estimated for a specific earthquake scenario at designated forecast points along the coasts. Deformation at the source was calculated by utilizing the Okada code (Okada, 1985). Tsunami waves' generation and propagation is based on the SWAN model (JRC/IPSC modification). To estimate the wave height (forecast points) the Green's Law function was used (JRC Tsunami Analysis Tool). Storm tracks' historical data indicate a return period of 17/41 years for H1/H2 hurricane categories, respectively. NOAA WAVEWATCH III model hindcast reanalysis was used to estimate the maximum significant wave height (wind and swell) along the coastline during two major storms. The associated storm-surge risk assessment accounted also for the coastline morphology. Seven empirical (independent) indicators were used to express the erosion susceptibility of the coasts. Each indicator is evaluated according to a semi-quantitative score that represents low, medium and high level of erosion risk or impact. The estimation of the coastal erosion hazard was derived through aggregating the indicators in a grid scale.

  6. Seismic hazard in the Intermountain West

    Science.gov (United States)

    Haller, Kathleen; Moschetti, Morgan P.; Mueller, Charles; Rezaeian, Sanaz; Petersen, Mark D.; Zeng, Yuehua

    2015-01-01

    The 2014 national seismic-hazard model for the conterminous United States incorporates new scientific results and important model adjustments. The current model includes updates to the historical catalog, which is spatially smoothed using both fixed-length and adaptive-length smoothing kernels. Fault-source characterization improved by adding faults, revising rates of activity, and incorporating new results from combined inversions of geologic and geodetic data. The update also includes a new suite of published ground motion models. Changes in probabilistic ground motion are generally less than 10% in most of the Intermountain West compared to the prior assessment, and ground-motion hazard in four Intermountain West cities illustrates the range and magnitude of change in the region. Seismic hazard at reference sites in Boise and Reno increased as much as 10%, whereas hazard in Salt Lake City decreased 5–6%. The largest change was in Las Vegas, where hazard increased 32–35%.

  7. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides. These were classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large-scale investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: landslide inventory map derived from aerial photo interpretation and direct field survey, generation of a database and digital maps, elaboration of a DTM and derived themes (i.e. slope angle map), definition of a superficial soil thickness map, geotechnical soil characterisation through a back-analysis on test slopes and laboratory test analysis, inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation, implementation of a slope stability model (infinite slope model) and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event. It corresponds to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
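
    The infinite slope model underlying the safety-factor mapping has a compact closed form: FS = [c' + (γ z cos²θ − u) tanφ'] / (γ z sinθ cosθ). The sketch below evaluates it for a hypothetical cell with a rising water table; the parameter values are illustrative, not those calibrated for the Vezza basin.

```python
import numpy as np

def infinite_slope_fs(c_eff, phi_eff_deg, slope_deg, soil_depth, water_depth,
                      gamma_soil=19.0, gamma_w=9.81):
    """Factor of safety of an infinite slope with slope-parallel seepage.

    c_eff: effective cohesion (kPa), phi_eff_deg: effective friction angle (deg),
    slope_deg: slope angle (deg), soil_depth: vertical soil thickness (m),
    water_depth: height of the water table above the failure plane (m).
    """
    theta = np.radians(slope_deg)
    phi = np.radians(phi_eff_deg)
    sigma = gamma_soil * soil_depth * np.cos(theta) ** 2           # total normal stress (kPa)
    u = gamma_w * water_depth * np.cos(theta) ** 2                 # pore pressure (kPa)
    tau = gamma_soil * soil_depth * np.sin(theta) * np.cos(theta)  # driving shear stress (kPa)
    return (c_eff + (sigma - u) * np.tan(phi)) / tau

# Example cell: 35 degree slope, 1.5 m of colluvium, water table rising after a storm.
for hw in (0.0, 0.75, 1.5):
    print(f"water table {hw:4.2f} m -> FS = {infinite_slope_fs(5.0, 30.0, 35.0, 1.5, hw):.2f}")
```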

  8. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
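
    A LASSO with quadratic terms selected by leave-one-out cross-validation, in the spirit of the model described above, can be sketched with scikit-learn; the training-load features below are random placeholders rather than the study's real variables.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical data: 122 training plans, 6 training-load features, 3 km race time (s).
rng = np.random.default_rng(5)
X = rng.normal(size=(122, 6))
y = 720 + 15 * X[:, 0] - 10 * X[:, 1] + 4 * X[:, 2] ** 2 + rng.normal(scale=5, size=122)

# LASSO on linear + quadratic (degree-2) terms, with alpha chosen by cross-validation.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(),
                      LassoCV(cv=10, max_iter=50000))

# Leave-one-out prediction error of the whole pipeline, as used for model selection.
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
print(f"LOO mean squared error: {loo_mse:.1f} s^2")
```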

  9. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    Science.gov (United States)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a duration model for ground motion, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency, is derived. Although the main motive of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Besides that, a significantly lower aleatory variability (σ) makes it a potentially viable alternative to the classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.

  10. Underground disposal of hazardous waste - state of the art and R and D projects

    International Nuclear Information System (INIS)

    Pitterich, H.; Brueckner, C.

    1998-01-01

    The project management group Entsorgung (PTE) coordinates R and D activities on deep geological disposal of hazardous waste besides other activities in the field of nuclear disposal. R and D projects aim at the improvement of tools used to predict the long-term behaviour of underground disposal facilities and the threat to man and the environment associated with these facilities. The current German situation regarding deep geological disposal of hazardous waste is described, and some results from the fields of waste analysis, geochemical modelling and geotechnical barriers for the sealing of waste disposal sites are presented. (orig.)

  11. Development of a Probabilistic Tornado Wind Hazard Model for the Continental United States Volume I: Main Report

    International Nuclear Information System (INIS)

    Boissonnade, A; Hossain, Q; Kimball, J

    2000-01-01

    Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526, are different from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.

  12. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. A combined morphometric, sedimentary, GIS and modelling analysis of flooding and debris flow hazard on a composite alluvial fan, Caveside, Tasmania

    Science.gov (United States)

    Kain, Claire L.; Rigby, Edward H.; Mazengarb, Colin

    2018-02-01

    Two episodes of intense flooding and sediment movement occurred in the Westmorland Stream alluvial system near Caveside, Australia, in January 2011 and June 2016. The events were investigated in order to better understand the drivers and functioning of this composite alluvial system on a larger scale, so as to provide awareness of the potential hazard from future flood and debris flow events. A novel combination of methods was employed, including field surveys, catchment morphometry, GIS mapping from LiDAR and aerial imagery, and hydraulic modelling using RiverFlow-2D software. Both events were initiated by extreme rainfall. The impacts of the events on the farmland appeared similar; however, there were differences in sediment source and transport processes that have implications for understanding recurrence probabilities. A debris flow was a key driver of the 2011 event, eroding the stream channel in the forested watershed and delivering a large volume of sediment downstream to the alluvial fan. In contrast, modelled flooding velocities suggest the impacts of the 2016 event were the result of an extended period of extreme stream flooding and consequent erosion of alluvium directly above the current fan apex. The morphometry of the catchment is better aligned with values from fluvially dominated fans found elsewhere, which suggests that flooding represents a more frequent future risk than debris flows. These findings have wider implications for the estimation of debris flow and flood hazard on alluvial fans in Tasmania and elsewhere, as well as further demonstrating the capacity of combined hydraulic modelling and geomorphologic investigation as a predictive tool to inform hazard management practices in environments affected by flooding and sediment movement.

  14. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit than the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.

  15. Probabilistic seismic hazard assessment. Gentilly 2

    International Nuclear Information System (INIS)

    1996-03-01

    Results of this probabilistic seismic hazard assessment were determined using a suite of conservative assumptions. The intent of this study was to perform a limited hazard assessment that incorporated a range of technically defensible input parameters. To best achieve this goal, the inputs selected for the hazard assessment tended to be conservative with respect to the selection of attenuation models and seismicity parameters. Seismic hazard estimates at Gentilly 2 were most affected by the selection of the attenuation model. Alternative definitions of seismic source zones had a relatively small impact on seismic hazard. A St. Lawrence Rift model including a maximum magnitude of 7.2 mb in the zone containing the site had little effect on the hazard estimate relative to other seismic source zonation models. Mean annual probabilities of exceeding the design peak ground acceleration and the design response spectrum for the Gentilly 2 site were computed to lie in the range of 0.001 to 0.0001. This hazard result falls well within the range determined to be acceptable for nuclear reactor sites located throughout the eastern United States. (author) 34 refs., 6 tabs., 28 figs

  16. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
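
    The adaptive local-model idea, predicting from dynamical neighbours in a reconstructed phase space, can be sketched as a time-delay embedding followed by nearest-neighbour regression; the toy below uses a synthetic surge-like signal, not the North Sea observations.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def delay_embed(series, dim, tau):
    """Reconstruct the phase space with embedding dimension `dim` and delay `tau`."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

# Synthetic surge-like signal: two tidal-ish harmonics plus noise.
t = np.arange(3000)
surge = (np.sin(2 * np.pi * t / 12.4) + 0.5 * np.sin(2 * np.pi * t / 149)
         + 0.1 * np.random.default_rng(6).normal(size=t.size))

dim, tau, horizon = 4, 3, 6                     # embedding parameters, prediction horizon (steps)
X = delay_embed(surge, dim, tau)
y = surge[(dim - 1) * tau + horizon:]           # value `horizon` steps after each embedded state
X = X[:len(y)]

split = 2500
local_model = KNeighborsRegressor(n_neighbors=10, weights="distance")  # local model from neighbours
local_model.fit(X[:split], y[:split])
pred = local_model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"{horizon}-step-ahead RMSE on held-out data: {rmse:.3f}")
```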

  17. Regional Landslide Hazard Assessment Considering Potential Climate Change

    Science.gov (United States)

    Almeida, S.; Holcombe, E.; Pianosi, F.; Wagener, T.

    2016-12-01

    Landslides have many negative economic and societal impacts, including the potential for significant loss of life and damage to infrastructure. These risks are likely to be exacerbated in the future by a combination of climatic and socio-economic factors. Climate change, for example, is expected to increase the occurrence of rainfall-triggered landslides, because a warmer atmosphere tends to produce more high intensity rainfall events. Prediction of future changes in rainfall, however, is subject to high levels of uncertainty, making it challenging for decision-makers to identify the areas and populations that are most vulnerable to landslide hazards. In this study, we demonstrate how a physically-based model - the Combined Hydrology and Stability Model (CHASM) - can be used together with Global Sensitivity Analysis (GSA) to explore the underlying factors controlling the spatial distribution of landslide risks across a regional landscape, while also accounting for deep uncertainty around potential future rainfall triggers. We demonstrate how GSA can be used to analyse CHASM which in turn represents the spatial variability of hillslope characteristics in the study region, while accounting for other uncertainties. Results are presented in the form of landslide hazard maps, utilising high-resolution digital elevation datasets for a case study in St Lucia in the Caribbean. Our findings about spatial landslide hazard drivers have important implications for data collection approaches and for long-term decision-making about land management practices.

  18. [Application of occupational hazard risk index model in occupational health risk assessment in a decorative coating manufacturing enterprises].

    Science.gov (United States)

    He, P L; Zhao, C X; Dong, Q Y; Hao, S B; Xu, P; Zhang, J; Li, J G

    2018-01-20

    Objective: To evaluate the occupational health risk of decorative coating manufacturing enterprises and to explore the applicability of the occupational hazard risk index model in health risk assessment, so as to provide a basis for the health management of enterprises. Methods: A decorative coating manufacturing enterprise in Hebei Province was chosen as the research object. Based on the types of occupational hazards and contact patterns, the occupational hazard risk index model was used to evaluate the occupational health risk factors in the key positions of the enterprise, and the assessment was checked against workplace measurement results and occupational health examinations. Results: The positions of oily painters, water-borne painters, filling workers and packers exposed to noise were rated as moderate harm. Positions of color workers exposed to chromic acid salts and of oily painters exposed to butyl acetate were rated as mild harm. Other positions were harmless. The abnormality rate for noise exposure in the physical examination results was 6.25%, and no abnormalities were found for the other risk factors. Conclusion: The occupational hazard risk index model can be used in the occupational health risk assessment of decorative coating manufacturing enterprises, and noise was the key hazard among the occupational hazards in this enterprise.

  19. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    Science.gov (United States)

    Kossobokov, Vladimir

    2013-04-01

    Why earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time-span of physically reliable Seismic History is yet a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes premature any kind of reliable probabilistic statements about narrowly localized seismic hazard. Moreover, seismic evidences accumulated to-date demonstrate clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when testing statistical significance is applied. Seismic events, including mega-earthquakes, cluster displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal, definitely, far from uniform even in a single segment of a fault zone. Such a situation contradicts generally accepted assumptions used for analytically tractable or computer simulations and complicates design of reliable methodologies for realistic earthquake hazard assessment, as well as search and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATION, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes, unfortunately, discloses gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION. The self-evident shortcomings and failures of GSHAP appeals to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from the first principles including background methodologies involved, such that there becomes: (a) a

  20. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  1. Report 2: Guidance document on practices to model and implement external flooding hazards in extended PSA

    International Nuclear Information System (INIS)

    Rebour, V.; Georgescu, G.; Leteinturier, D.; Raimond, E.; La Rovere, S.; Bernadara, P.; Vasseur, D.; Brinkman, H.; Groudev, P.; Ivanov, I.; Turschmann, M.; Sperbeck, S.; Potempski, S.; Hirata, K.; Kumar, Manorma

    2016-01-01

    This report provides a review of existing practices to model and implement external flooding hazards in existing level 1 PSA. The objective is to identify good practices on the modelling of initiating events (internal and external hazards) with a perspective of development of extended PSA and implementation of external events modelling in extended L1 PSA, its limitations/difficulties as far as possible. The views presented in this report are based on the ASAMPSA-E partners' experience and available publications. The report includes discussions on the following issues: - how to structure a L1 PSA for external flooding events, - information needed from geosciences in terms of hazards modelling and to build relevant modelling for PSA, - how to define and model the impact of each flooding event on SSCs with distinction between the flooding protective structures and devices and the effect of protection failures on other SSCs, - how to identify and model the common cause failures in one reactor or between several reactors, - how to apply HRA methodology for external flooding events, - how to credit additional emergency response (post-Fukushima measures like mobile equipment), - how to address the specific issues of L2 PSA, - how to perform and present risk quantification. (authors)

  2. Hazard Ranking Method for Populations Exposed to Arsenic in Private Water Supplies: Relation to Bedrock Geology.

    Science.gov (United States)

    Crabbe, Helen; Fletcher, Tony; Close, Rebecca; Watts, Michael J; Ander, E Louise; Smedley, Pauline L; Verlander, Neville Q; Gregory, Martin; Middleton, Daniel R S; Polya, David A; Studden, Mike; Leonardi, Giovanni S

    2017-12-01

    Approximately one million people in the UK are served by private water supplies (PWS) where connection to the main municipal water supply system is not practical or where PWS is the preferred option. Chronic exposure to contaminants in PWS may have adverse effects on health. South West England is an area with elevated arsenic concentrations in groundwater, and over 9000 domestic dwellings here are supplied by PWS. There remains uncertainty as to the extent of the population exposed to arsenic (As), and the factors predicting such exposure. We describe a hazard assessment model based on simplified geology with the potential to predict exposure to As in PWS. Households with a recorded PWS in Cornwall were recruited to take part in a water sampling programme from 2011 to 2013. Bedrock geologies were aggregated and classified into nine Simplified Bedrock Geological Categories (SBGC), plus a cross-cutting "mineralized" area. PWS were sampled by random selection within SBGCs, and some 508 households volunteered for the study. Transformations of the data were explored to estimate the distribution of As concentrations for PWS by SBGC. Using the distribution per SBGC, we predict the proportion of dwellings that would be affected by high concentrations and rank the geologies according to hazard. Within most SBGCs, As concentrations were found to have log-normal distributions. Across these areas, the proportion of dwellings predicted to have drinking water over the prescribed concentration value (PCV) for As ranged from 0% to 20%. From these results, a pilot predictive model was developed that calculates the proportion of PWS above the PCV for As; the hazard ranking supports local decision making and prioritization. With further development and testing, this can help local authorities predict the number of dwellings that might fail the PCV for As, based on bedrock geology. The model presented here for Cornwall could be applied in areas with similar geologies. Application of the method
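
    The per-geology exceedance estimate rests on fitting a log-normal distribution to the As concentrations within each class and evaluating the tail above the PCV (10 µg/L for arsenic). A minimal sketch with made-up sample concentrations follows; the class names are hypothetical.

```python
import numpy as np
from scipy import stats

PCV_AS = 10.0   # prescribed concentration value for arsenic, ug/L

# Hypothetical measured As concentrations (ug/L) per simplified bedrock geology class.
samples = {
    "SBGC-A (granite)": [1.2, 3.4, 0.8, 15.0, 7.7, 2.1, 22.0, 5.5],
    "SBGC-B (slate)":   [0.3, 0.9, 1.1, 0.6, 2.4, 0.5, 1.8, 0.7],
}

for geology, conc in samples.items():
    log_c = np.log(conc)
    mu, sigma = log_c.mean(), log_c.std(ddof=1)          # fit log-normal via moments of log data
    # Fraction of private supplies predicted to exceed the PCV under this distribution.
    p_exceed = stats.norm.sf((np.log(PCV_AS) - mu) / sigma)
    print(f"{geology}: predicted {100 * p_exceed:.1f}% of supplies above {PCV_AS} ug/L")
```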

  3. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...

  4. Global Drought Hazard Frequency and Distribution

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Drought Hazard Frequency and Distribution is a 2.5 minute grid based upon the International Research Institute for Climate Prediction's (IRI) Weighted Anomaly...

  5. Assessing Surface Fuel Hazard in Coastal Conifer Forests through the Use of LiDAR Remote Sensing

    Science.gov (United States)

    Koulas, Christos

    The research problem that this thesis seeks to examine is a method of predicting conventional fire hazards using data drawn from specific regions, namely the Sooke and Goldstream watershed regions in coastal British Columbia. This thesis investigates whether LiDAR data can be used to describe conventional forest stand fire hazard classes. Three objectives guided this thesis: to discuss the variables associated with fire hazard, specifically the distribution and makeup of fuel; to examine the relationship between derived LiDAR biometrics and forest attributes related to hazard assessment factors defined by the Capitol Regional District (CRD); and to assess the viability of the LiDAR biometric decision tree in the CRD based on current frameworks for use. The research method uses quantitative datasets to assess the optimal generalization of these types of fire hazard data through discriminant analysis. Findings illustrate significant LiDAR-derived data limitations, and reflect the literature in that flawed field application of data modelling techniques has led to a disconnect between the ways in which fire hazard models have been intended to be used by scholars and the ways in which they are used by those tasked with prevention of forest fires. It can be concluded that a significant trade-off exists between computational requirements for wildfire simulation models and the algorithms commonly used by field teams to apply these models with remote sensing data, and that CRD forest management practices would need to change to incorporate a decision tree model in order to decrease risk.

  6. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  7. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  8. Seismic hazard in the eastern United States

    Science.gov (United States)

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  9. An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard

    Directory of Open Access Journals (Sweden)

    Xiulan Wang

    2014-01-01

    it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. And the model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.

  10. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  11. Major hazards onshore and offshore

    International Nuclear Information System (INIS)

    1992-01-01

    This symposium continues the tradition of bringing together papers on a topic of current interest and importance in terms of process safety - in this case, Major Hazards Onshore and Offshore. Lord Cullen in his report on the Piper Alpha disaster has, in effect, suggested that the experience gained in the control of major hazards onshore during the 1980s should be applied to improve safety offshore during the 1990s. This major three-day symposium reviews what has been learned so far with regard to major hazards and considers its present and future applications both onshore and offshore. The topics covered in the programme are wide ranging and deal with all aspects of legislation, the application of regulations, techniques for evaluating hazards and prescribing safety measures in design, construction and operation, the importance of the human factors, and recent technical developments in protective measures, relief venting and predicting the consequences of fires and explosions. (author)

  12. An artificial neural network prediction model of congenital heart disease based on risk factors: A hospital-based case-control study.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun

    2017-02-01

    An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0
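
    A feed-forward back-propagation network with the 15-12-1 topology described above can be approximated with scikit-learn's MLPClassifier; the predictor matrix below is a random placeholder for the 15 significant risk factors, not the study's case-control data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Placeholder data: 358 subjects, 15 binary risk-factor predictors, CHD label.
rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(358, 15)).astype(float)
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.8, size=358) > 3).astype(int)

# 85:15 split, mirroring the training/testing partition described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

# One hidden layer of 12 neurons, i.e. a 15-12-1 feed-forward network.
bpnn = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                     solver="adam", max_iter=2000, random_state=0)
bpnn.fit(X_tr, y_tr)

pred = bpnn.predict(X_te)
print("testing accuracy   :", accuracy_score(y_te, pred))
print("testing sensitivity:", recall_score(y_te, pred))
```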

  13. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, improvements to prediction methods have not been very significant, and the traditional statistical prediction method has the defects of low precision and poor interpretability, so it can neither theoretically guarantee the generalization ability of the prediction model nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that can produce a large volume of cargo, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect the regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from purely statistical principles to a new area of spatial and regional economics.

  14. A Dynamic Hydrology-Critical Zone Framework for Rainfall-triggered Landslide Hazard Prediction

    Science.gov (United States)

    Dialynas, Y. G.; Foufoula-Georgiou, E.; Dietrich, W. E.; Bras, R. L.

    2017-12-01

    Watershed-scale coupled hydrologic-stability models are still in their early stages, and are characterized by important limitations: (a) either they assume steady-state or quasi-dynamic watershed hydrology, or (b) they simulate landslide occurrence based on a simple one-dimensional stability criterion. Here we develop a three-dimensional landslide prediction framework, based on a coupled hydrologic-slope stability model and incorporation of the influence of deep critical zone processes (i.e., flow through weathered bedrock and exfiltration to the colluvium) for more accurate prediction of the timing, location, and extent of landslides. Specifically, a watershed-scale slope stability model that systematically accounts for the contribution of driving and resisting forces in three-dimensional hillslope segments was coupled with a spatially-explicit and physically-based hydrologic model. The landslide prediction framework considers critical zone processes and structure, and explicitly accounts for the spatial heterogeneity of surface and subsurface properties that control slope stability, including soil and weathered bedrock hydrological and mechanical characteristics, vegetation, and slope morphology. To test performance, the model was applied in landslide-prone sites in the US, the hydrology of which has been extensively studied. Results showed that both rainfall infiltration in the soil and groundwater exfiltration exert a strong control on the timing and magnitude of landslide occurrence. We demonstrate the extent to which three-dimensional slope destabilizing factors, which are modulated by dynamic hydrologic conditions in the soil-bedrock column, control landslide initiation at the watershed scale.
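
    For orientation, the sketch below shows the classical one-dimensional infinite-slope factor-of-safety criterion that such watershed-scale, three-dimensional frameworks aim to move beyond; all parameter values are hypothetical and the formula is the textbook planar-failure form, not the authors' model.

    ```python
    # Illustrative sketch of the one-dimensional infinite-slope stability criterion.
    # FS < 1 flags instability; rising pore pressure during rainfall lowers FS.
    # All parameter values below are hypothetical.
    import math

    def factor_of_safety(c_eff, phi_deg, gamma_soil, depth, slope_deg, pore_pressure):
        """Infinite-slope factor of safety for a planar failure surface.

        c_eff         effective cohesion [Pa]
        phi_deg       effective friction angle [deg]
        gamma_soil    soil unit weight [N/m^3]
        depth         vertical soil depth above the failure plane [m]
        slope_deg     slope angle [deg]
        pore_pressure pore-water pressure on the failure plane [Pa]
        """
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        normal_stress = gamma_soil * depth * math.cos(beta) ** 2
        resisting = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
        driving = gamma_soil * depth * math.sin(beta) * math.cos(beta)
        return resisting / driving

    print(factor_of_safety(c_eff=5e3, phi_deg=30, gamma_soil=18e3,
                           depth=2.0, slope_deg=35, pore_pressure=8e3))
    ```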

  15. Identification and delineation of areas flood hazard using high accuracy of DEM data

    Science.gov (United States)

    Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.

    2018-05-01

    Flood incidents that often occur in Karawang regency need to be mitigated, and expectations rest on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities, and higher-accuracy DEM data used in modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by mapping wetland areas using DEM data and Landsat-8 images. TerraSAR-X high-resolution data are used to detect wetlands from the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover is analyzed using the Tasseled Cap Transformation (TCT) method. The TWI modeling yields information about potentially flood-prone land. Overlaying the TWI map with the land cover map shows that, in Karawang regency, the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
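
    A minimal sketch of the Topographic Wetness Index computation used in the study, TWI = ln(a / tan(beta)), is given below; the specific catchment area and slope grids are hypothetical stand-ins for rasters derived from the DEM.

    ```python
    # Minimal sketch of the Topographic Wetness Index, TWI = ln(a / tan(beta)),
    # computed cell-by-cell from hypothetical DEM-derived inputs.
    import numpy as np

    def twi(specific_catchment_area, slope_deg, eps=1e-6):
        """specific_catchment_area: upslope area per unit contour length [m]
        slope_deg: local slope angle [deg]; eps avoids division by zero on flats."""
        beta = np.radians(slope_deg)
        return np.log(specific_catchment_area / (np.tan(beta) + eps))

    # Hypothetical 3x3 grids (flow accumulation and slope rasters from a DEM)
    sca = np.array([[12.0, 30.0, 250.0],
                    [8.0, 55.0, 400.0],
                    [5.0, 90.0, 900.0]])
    slope = np.array([[12.0, 8.0, 2.0],
                      [15.0, 6.0, 1.5],
                      [20.0, 5.0, 1.0]])
    print(twi(sca, slope))   # higher TWI -> wetter, more flood-prone cells
    ```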

  16. Living donor risk model for predicting kidney allograft and patient survival in an emerging economy.

    Science.gov (United States)

    Zafar, Mirza Naqi; Wong, Germaine; Aziz, Tahir; Abbas, Khawar; Adibul Hasan Rizvi, S

    2018-03-01

    Living donor kidneys are the main source of donor organs in low- to middle-income countries. We aimed to develop a living donor risk model that predicts graft and patient survival in an emerging economy. We used data from the Sindh Institute of Urology and Transplantation (SIUT) database (n = 2283 recipients and n = 2283 living kidney donors, transplanted between 1993 and 2009) and conducted Cox proportional hazards analyses to develop a composite score that predicts graft and patient survival. The donor factors age, creatinine clearance, nephron dose (estimated by the donor/recipient body weight ratio) and human leukocyte antigen (HLA) match were included in the living donor risk model. The adjusted hazard ratios (HRs) for graft failure among those who received a kidney with living donor scores (relative to a donor score of zero) of 1, 2, 3 and 4 were 1.14 (95%CI: 0.94-1.39), 1.24 (95%CI: 1.03-1.49), 1.25 (95%CI: 1.03-1.51) and 1.36 (95%CI: 1.08-1.72) (P-value for trend = 0.05). Similar findings were observed for patient survival. Similar to findings in high-income countries, our study suggests that donor characteristics such as age, nephron dose, creatinine clearance and HLA match are important factors that determine long-term patient and graft survival in low-income countries. However, other crucial but undefined factors may play a role in determining the overall risk of graft failure and mortality in living kidney donor transplant recipients. © 2016 Asian Pacific Society of Nephrology.
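
    A hedged sketch of a Cox proportional hazards fit of the kind described is given below, using the lifelines package on a synthetic data frame; the column names and values are illustrative assumptions, not the SIUT data.

    ```python
    # Hedged sketch: Cox proportional hazards regression on synthetic donor factors.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "donor_age": rng.uniform(20, 65, n),
        "creatinine_clr": rng.uniform(60, 130, n),
        "weight_ratio": rng.uniform(0.6, 1.4, n),   # donor/recipient body weight (nephron dose proxy)
        "hla_mismatch": rng.integers(0, 6, n),
    })
    # synthetic follow-up times and graft-failure indicator (placeholders, not SIUT data)
    risk = 0.03 * df["donor_age"] - 0.02 * df["creatinine_clr"] + 0.3 * df["hla_mismatch"]
    df["time_to_failure"] = rng.exponential(scale=np.exp(-0.05 * risk) * 8.0)
    df["graft_failed"] = rng.integers(0, 2, n)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_to_failure", event_col="graft_failed")
    cph.print_summary()   # adjusted hazard ratios for each donor factor
    ```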

  17. An estimating equation for parametric shared frailty models with marginal additive hazards

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Martinussen, Torben

    2004-01-01

    Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable...

  18. Work-principle model for predicting toxic fumes of nonideal explosives

    Energy Technology Data Exchange (ETDEWEB)

    Wieland, Michael S. [National Institute of Occupational Safety and Health, Pittsburgh Research Center, P.O. Box 18070, Pittsburgh, PA 15236-0070 (United States)

    2004-08-01

    The work-principle from thermodynamics was used to formulate a model for predicting toxic fumes from mining explosives in underground chamber tests, where rapid turbulent combustion within the surrounding air noticeably changes the resulting concentrations. Two model constants were required to help characterize the reaction zone undergoing rapid chemical transformations in conjunction with heat transfer and work output: a stoichiometry mixing fraction and a reaction-quenching temperature. Rudimentary theory with an unsteady uniform concentration gradient was taken to characterize the combustion zone, yielding 75% for the mixing fraction. Four quenching temperature trends were resolved and compared to test results of ammonium nitrate compositions with different fuel-oil percentages (ANFO). The quenching temperature 2345 K was the optimum choice for fitting the two major components of fume toxicity: carbon monoxide (CO) and total nitrogen oxides (NO{sub X}). The resulting two-constant model was used to generate comparisons for test results of ANFO compositions with additives. Though respectable fits were usually found, charge formulations which reacted weakly could not be resolved numerically. The work-principle model yields toxic concentrations for a range of charge formulations, making it a useful tool for investigating the potential hazard of released fumes and reducing the risk of unwanted incidents. (Abstract Copyright [2004], Wiley Periodicals, Inc.)

  19. Landslide Hazard Mapping in Rwanda Using Logistic Regression

    Science.gov (United States)

    Piller, A.; Anderson, E.; Ballard, H.

    2015-12-01

    Landslides in the United States cause more than $1 billion in damages and 50 deaths per year (USGS 2014). Globally, the figures are much graver, yet monitoring, mapping and forecasting of these hazards are less than adequate. Seventy-five percent of the population of Rwanda earns a living from farming, mostly subsistence. Loss of farmland, housing, or life to landslides is a very real hazard, and landslides in Rwanda have economic, social, and environmental impacts. In a developing nation that faces challenges in tracking, cataloging, and predicting the numerous landslides that occur each year, satellite imagery and spatial analysis allow for remote study. We have focused on the development of a landslide inventory and a statistical methodology for assessing landslide hazards. Using logistic regression on approximately 30 test variables (i.e. slope, soil type, land cover, etc.) and a sample of over 200 landslides, we determine which variables are statistically most relevant to landslide occurrence in Rwanda. A preliminary predictive hazard map for Rwanda has been produced using the variables selected from the logistic regression analysis.
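
    An illustrative sketch of the statistical core of such an analysis, logistic regression on per-cell predictors to produce a susceptibility score, is given below; the predictors and labels are synthetic and do not reproduce the authors' 30-variable workflow.

    ```python
    # Illustrative sketch (not the authors' workflow): logistic regression on
    # hypothetical per-cell predictors to produce a landslide susceptibility score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 45, n),      # slope [deg]
        rng.integers(0, 5, n),      # soil type class (coded)
        rng.integers(0, 6, n),      # land cover class (coded)
        rng.uniform(500, 3000, n),  # annual rainfall [mm]
    ])
    # synthetic landslide / no-landslide labels for illustration only
    y = (X[:, 0] + 0.002 * X[:, 3] + rng.normal(0, 5, n) > 35).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    susceptibility = model.predict_proba(X)[:, 1]   # per-cell probability for a hazard map
    print(model.coef_, susceptibility[:5])
    ```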

  20. Real-Time Aircraft Cosmic Ray Radiation Exposure Predictions from the NAIRAS Model

    Science.gov (United States)

    Mertens, C. J.; Tobiska, W.; Kress, B. T.; Xu, X.

    2012-12-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a prototype operational model for predicting commercial aircraft radiation exposure from galactic and solar cosmic rays. NAIRAS predictions are currently streaming live from the project's public website, and the exposure rate nowcast is also available on the SpaceWx smartphone app for iPhone, iPad, and Android. Cosmic rays are the primary source of human exposure to high linear energy transfer radiation at aircraft altitudes, which increases the risk of cancer and other adverse health effects. Thus, the NAIRAS model addresses an important national need with broad societal, public health and economic benefits. There is also interest in extending NAIRAS to the LEO environment to address radiation hazard issues for the emerging commercial spaceflight industry. The processes responsible for the variability in the solar wind, interplanetary magnetic field, solar energetic particle spectrum, and the dynamical response of the magnetosphere to these space environment inputs strongly influence the composition and energy distribution of the atmospheric ionizing radiation field. Real-time observations are required at a variety of locations within the geospace environment. The NAIRAS model is driven by real-time input data from ground-, atmospheric-, and space-based platforms. During the development of the NAIRAS model, new science questions and observational data gaps were identified that must be addressed in order to obtain a more reliable and robust operational model of atmospheric radiation exposure. The focus of this talk is to present the current capabilities of the NAIRAS model, discuss future developments in aviation radiation modeling and instrumentation, and propose strategies and methodologies of bridging known gaps in current modeling and observational capabilities.

  1. Linear non-threshold (LNT) radiation hazards model and its evaluation

    International Nuclear Information System (INIS)

    Min Rui

    2011-01-01

    In order to introduce the linear non-threshold (LNT) model used in studies of the dose effect of radiation hazards and to evaluate its application, a comprehensive literature analysis was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can fit cell survival curves well over the full range of high, medium and low absorbed doses. There are still many uncertainties in the assessment model of effective dose from internal radiation based on the LNT assumptions and individual mean organ equivalent doses, and it is necessary to establish gender-specific voxel human models that take gender differences into account. Overall, the advantages and disadvantages of the various models coexist; until a new theory and model are established, the LNT model remains the most scientifically defensible choice. (author)

  2. Radar image and data fusion for natural hazards characterisation

    Science.gov (United States)

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong

    2010-01-01

    Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.

  3. Integrative Chemical-Biological Read-Across Approach for Chemical Hazard Classification

    Science.gov (United States)

    Low, Yen; Sedykh, Alexander; Fourches, Denis; Golbraikh, Alexander; Whelan, Maurice; Rusyn, Ivan; Tropsha, Alexander

    2013-01-01

    Traditional read-across approaches typically rely on the chemical similarity principle to predict chemical toxicity; however, the accuracy of such predictions is often inadequate due to the underlying complex mechanisms of toxicity. Here we report on the development of a hazard classification and visualization method that draws upon both chemical structural similarity and comparisons of biological responses to chemicals measured in multiple short-term assays ("biological" similarity). The Chemical-Biological Read-Across (CBRA) approach infers each compound's toxicity from those of both chemical and biological analogs whose similarities are determined by the Tanimoto coefficient. Classification accuracy of CBRA was compared to that of classical RA and other methods using chemical descriptors alone, or in combination with biological data. Different types of adverse effects (hepatotoxicity, hepatocarcinogenicity, mutagenicity, and acute lethality) were classified using several biological data types (gene expression profiling and cytotoxicity screening). CBRA-based hazard classification exhibited consistently high external classification accuracy and applicability to diverse chemicals. Transparency of the CBRA approach is aided by the use of radial plots that show the relative contribution of analogous chemical and biological neighbors. Identification of both chemical and biological features that give rise to the high accuracy of CBRA-based toxicity prediction facilitates mechanistic interpretation of the models. PMID:23848138
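
    A minimal sketch of the Tanimoto coefficient used to select chemical and biological neighbours in CBRA-style read-across is given below; the bit-vector fingerprints are hypothetical.

    ```python
    # Minimal sketch: Tanimoto similarity between binary fingerprints, used to pick
    # the nearest chemical or biological analogs whose toxicity informs the query.
    import numpy as np

    def tanimoto(a, b):
        """Tanimoto coefficient for binary fingerprints a and b."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        intersection = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return intersection / union if union else 0.0

    query = [1, 0, 1, 1, 0, 0, 1, 0]
    neighbours = {"analog_A": [1, 0, 1, 0, 0, 0, 1, 0],
                  "analog_B": [0, 1, 0, 1, 1, 0, 0, 1]}
    scores = {name: tanimoto(query, fp) for name, fp in neighbours.items()}
    print(scores)   # read-across weights the most similar analogs more heavily
    ```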

  4. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering the benefits; however, the excessive pursuit of accuracy makes some “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models that occupy large computational resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  5. Prediction and validation of pool fire development in enclosures by means of CFD Models for risk assessment of nuclear power plants (Poolfire) - Report year 2

    International Nuclear Information System (INIS)

    Van Hees, P.; Wahlqvist, J.; Kong, D.; Hostikka, S.; Sikanen, T.; Husted, B.; Magnusson, T.; Joerud, F.

    2013-05-01

    Fires in nuclear power plants can be an important hazard for the overall safety of the facility. One of the typical fire sources is a pool fire. It is therefore important to have good knowledge of the fire behaviour of pool fires and to be able to predict the heat release rate by predicting the mass loss rate. This project envisages developing a pyrolysis model to be used in CFD models. In this report the activities for the second year are reported, comprising an overview of the experiments conducted, further development and validation of the models, and the case studies to be selected in year 3. (Author)

  6. Prediction and validation of pool fire development in enclosures by means of CFD Models for risk assessment of nuclear power plants (Poolfire) - Report year 2

    Energy Technology Data Exchange (ETDEWEB)

    van Hees, P.; Wahlqvist, J.; Kong, D. [Lund Univ., Lund (Sweden); Hostikka, S.; Sikanen, T. [VTT Technical Research Centre of Finland (Finland); Husted, B. [Haugesund Univ. College, Stord (Norway); Magnusson, T. [Ringhals AB, Vaeroebacka (Sweden); Joerud, F. [European Spallation Source (ESS), Lund (Sweden)

    2013-05-15

    Fires in nuclear power plants can be an important hazard for the overall safety of the facility. One of the typical fire sources is a pool fire. It is therefore important to have good knowledge of the fire behaviour of pool fires and to be able to predict the heat release rate by predicting the mass loss rate. This project envisages developing a pyrolysis model to be used in CFD models. In this report the activities for the second year are reported, comprising an overview of the experiments conducted, further development and validation of the models, and the case studies to be selected in year 3. (Author)

  7. Tornado hazard model with the variation effects of tornado intensity along the path length

    International Nuclear Information System (INIS)

    Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo

    2015-01-01

    Most Japanese tornadoes have been reported near the coastline, where all of the Japanese nuclear power plants are located. Japanese electric power companies must assess tornado risks at their plants under a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends the long narrow strip area along the coastline, with a width of 5-10 km, as the region of interest, although the model tends to produce inadequate wind speed estimates there owing to its limits of applicability. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)

  8. Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)

    Energy Technology Data Exchange (ETDEWEB)

    Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)

    2009-05-15

    The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions had also to be answered about aspects of the approach to modelling to be used: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure has to be considered. (author)

  9. Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)

    International Nuclear Information System (INIS)

    Musson, R. M. W.; Sellami, S.; Bruestle, W.

    2009-01-01

    The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions had also to be answered about aspects of the approach to modelling to be used: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure has to be considered. (author)

  10. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  11. Building resilience to weather-related hazards through better preparedness

    Science.gov (United States)

    Keller, Julia; Golding, Brian; Johnston, David; Ruti, Paolo

    2017-04-01

    Recent developments in weather forecasting have transformed our ability to predict weather-related hazards, while mobile communication is radically changing the way that people receive information. At the same time, vulnerability to weather-related hazards is growing through urban expansion, population growth and climate change. This talk will address issues facing the science community in responding to the Sendai Framework objective to "substantially increase the availability of and access to multi-hazard early warning systems" in the context of weather-related hazards. It will also provide an overview of activities and approaches developed in the World Meteorological Organisation's High Impact Weather (HIWeather) project. HIWeather has identified and is promoting research in key multi-disciplinary gaps in our knowledge, including in basic meteorology, risk prediction, communication and decision making, that affect our ability to provide effective warnings. The results will be pulled together in demonstration projects that will both showcase leading edge capability and build developing country capacity.

  12. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    Science.gov (United States)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression

  13. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    International Nuclear Information System (INIS)

    Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey

    2015-01-01

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships

  14. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)

    2015-06-15

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.

  15. A re-evaluation of PETROTOX for predicting acute and chronic toxicity of petroleum substances.

    Science.gov (United States)

    Redman, Aaron D; Parkerton, Thomas F; Leon Paumen, Miriam; Butler, Josh D; Letinski, Daniel J; den Haan, Klass

    2017-08-01

    The PETROTOX model was developed to perform aquatic hazard assessment of petroleum substances based on substance composition. The model relies on the hydrocarbon block method, which is widely used for conducting petroleum substance risk assessments, providing further justification for evaluating model performance. Previous work described this model and provided a preliminary calibration and validation using acute toxicity data for a limited set of petroleum substances. The objective of the present study was to re-evaluate PETROTOX using expanded data covering both acute and chronic toxicity endpoints on invertebrates, algae, and fish for a wider range of petroleum substances. The results indicated that recalibration of 2 model parameters was required, namely, the algal critical target lipid body burden and the log octanol-water partition coefficient (K_OW) limit, used to account for reduced bioavailability of hydrophobic constituents. Acute predictions from the updated model were compared with observed toxicity data and found to generally be within a factor of 3 for algae and invertebrates but overestimated fish toxicity. Chronic predictions were generally within a factor of 5 of empirical data. Furthermore, PETROTOX predicted acute and chronic hazard classifications that were consistent or conservative in 93 and 84% of comparisons, respectively. The PETROTOX model is considered suitable for the purpose of characterizing petroleum substance hazard in substance classification and risk assessments. Environ Toxicol Chem 2017;36:2245-2252. © 2017 SETAC.
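
    For context, the sketch below illustrates the toxic-unit summation idea behind the hydrocarbon block method on which PETROTOX builds, assuming additivity across blocks; the block names, concentrations and effect concentrations are hypothetical and this is not the PETROTOX code.

    ```python
    # Hedged sketch of toxic-unit summation under the hydrocarbon block method:
    # each block contributes an exposure/effect ratio and blocks are summed,
    # assuming additive toxicity. All numbers are hypothetical.
    blocks = [
        # (block name, dissolved concentration [mg/L], predicted effect concentration [mg/L])
        ("C9-C12 aliphatics", 0.020, 0.30),
        ("C10-C12 aromatics", 0.015, 0.08),
        ("C12-C16 aromatics", 0.004, 0.02),
    ]

    toxic_units = sum(conc / effect_conc for _, conc, effect_conc in blocks)
    print(f"sum TU = {toxic_units:.2f}")   # TU >= 1 suggests the mixture exceeds the effect threshold
    ```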

  16. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  17. Multiscale modeling and simulation of embryogenesis for in silico predictive toxicology (WC9)

    Science.gov (United States)

    Translating big data from alternative and HTS platforms into hazard identification and risk assessment is an important need for predictive toxicology and for elucidating adverse outcome pathways (AOPs) in developmental toxicity. Understanding how chemical disruption of molecular ...

  18. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  19. Korean risk assessment model for breast cancer risk prediction.

    Science.gov (United States)

    Park, Boyoung; Ma, Seung Hyun; Shin, Aesun; Chang, Myung-Chul; Choi, Ji-Yeob; Kim, Sungwan; Han, Wonshik; Noh, Dong-Young; Ahn, Sei-Hyun; Kang, Daehee; Yoo, Keun-Young; Park, Sue K

    2013-01-01

    We evaluated the performance of the Gail model for a Korean population and developed a Korean breast cancer risk assessment tool (KoBCRAT) based upon equations developed for the Gail model for predicting breast cancer risk. Using 3,789 sets of cases and controls, risk factors for breast cancer among Koreans were identified. Individual probabilities were projected using Gail's equations and Korean hazard data. We compared the 5-year and lifetime risk produced using the modified Gail model which applied Korean incidence and mortality data and the parameter estimators from the original Gail model with those produced using the KoBCRAT. We validated the KoBCRAT based on the expected/observed breast cancer incidence and area under the curve (AUC) using two Korean cohorts: the Korean Multicenter Cancer Cohort (KMCC) and National Cancer Center (NCC) cohort. The major risk factors under the age of 50 were family history, age at menarche, age at first full-term pregnancy, menopausal status, breastfeeding duration, oral contraceptive usage, and exercise, while those at and over the age of 50 were family history, age at menarche, age at menopause, pregnancy experience, body mass index, oral contraceptive usage, and exercise. The modified Gail model produced lower 5-year risk for the cases than for the controls (p = 0.017), while the KoBCRAT produced higher 5-year and lifetime risk for the cases than for the controls (pKorean women, especially urban women.

  20. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    Science.gov (United States)

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well-studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (S_IG and S_SS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
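
    An illustrative sketch of extracting Rmax and R2% from a run-up time series is given below; the synthetic series stands in for XBeach output or observations and is not the authors' processing chain.

    ```python
    # Illustrative sketch: Rmax and R2% from a shoreline water-level (run-up) series.
    import numpy as np
    from scipy.signal import find_peaks

    t = np.arange(0, 3600, 1.0)                       # 1 h of synthetic data at 1 Hz
    runup = (0.3 * np.sin(2 * np.pi * t / 12.0)       # sea-swell band swash
             + 0.5 * np.sin(2 * np.pi * t / 120.0)    # infragravity band swash
             + 0.4)                                   # setup offset

    peaks, _ = find_peaks(runup)                      # individual run-up maxima
    r_max = runup[peaks].max()
    r_2pct = np.percentile(runup[peaks], 98)          # exceeded by 2% of run-up events
    print(r_max, r_2pct)
    ```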

  1. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical
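
    A hedged sketch of the final step described, turning a fitted 2-parameter Weibull lifetime distribution into an expected failure rate versus pipe age via the Weibull hazard function, is given below; the shape and scale values are hypothetical.

    ```python
    # Hedged sketch: Weibull hazard function as an expected failure rate versus age,
    # plus a Monte Carlo check against sampled lifetimes. Parameters are hypothetical.
    import numpy as np

    shape, scale = 2.2, 80.0     # hypothetical Weibull fit to simulated lifetimes [yr]

    def weibull_hazard(t, k, lam):
        """h(t) = (k/lam) * (t/lam)**(k-1): failures per surviving pipe per year at age t."""
        return (k / lam) * (t / lam) ** (k - 1)

    ages = np.array([10, 25, 50, 75])
    print(dict(zip(ages.tolist(), weibull_hazard(ages, shape, scale))))

    # Monte Carlo check: empirical conditional failure rate near age 50 approximates h(50)
    lifetimes = scale * np.random.default_rng(0).weibull(shape, size=100_000)
    print(np.mean((lifetimes > 50) & (lifetimes <= 51)) / np.mean(lifetimes > 50))
    ```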

  2. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zucca, J J; Walter, W R; Rodgers, A J; Richards, P; Pasyanos, M E; Myers, S C; Lay, T; Harris, D; Antoun, T

    2008-11-19

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags

  3. Risk Evaluation of Multiple Hazards during Sediment and Water Related Disasters in a Small Basin

    Science.gov (United States)

    Yamanoi, Kazuki; Fujita, Masaharu

    2016-04-01

    To reduce human damage due to sediment and water related disasters induced by heavy rainfall, warning and evacuation systems are very important. In Japan, the Meteorological Agency issues a sediment disaster alert when the potential for sediment disasters increases. Following the alert, the local government issues an evacuation advisory considering the alert and premonitory phenomena. However, it is very difficult for local people to perceive the danger around them because the alert and advisory do not contain any definite information, so they sometimes misjudge evacuation actions. One reason is that not only crucial hazards but also relatively small-scale multiple hazards take place and raise evacuation difficulties during sediment and water related disasters. Examples of small-scale hazards include: rainfall-associated hazards such as poor visibility or road submergence; landslide-associated hazards such as slope failure or sediment inflow; and flood-associated hazards such as overtopping of river dikes, inundation, or destruction of bridges. The purpose of this study was to estimate the risk of multiple hazards during disaster events by numerical simulation. We applied an integrated sediment runoff model on unit channels, unit slopes, and slope units to an actual sediment and water related disaster that occurred in a small basin in Tamba city, Hyogo, Japan. The maximum rainfall per hour was 91 mm (17/09/2014, 2:00-3:00) and the maximum daily precipitation was 414 mm. The integrated model contains a semi-physically based landslide prediction (sediment production) model, a rainfall runoff model employing the kinematic wave method, a model of sediment supply to channels, and bedload and suspended sediment transport models. We evaluated the risk of rainfall-associated hazards in each slope unit at 4 levels (Levels I-IV) using the rainfall intensity Ir [mm/hour]. The risk of flood-associated hazards was also estimated using the ratio of calculated water level and

  4. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, using all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  5. Reassessment of probabilistic seismic hazard in the Marmara region

    Science.gov (United States)

    Kalkan, Erol; Gulkan, Polat; Yilmaz, Nazan; Çelebi, Mehmet

    2009-01-01

    In 1999, the eastern coastline of the Marmara region (Turkey) witnessed increased seismic activity on the North Anatolian fault (NAF) system with two damaging earthquakes (M 7.4 Kocaeli and M 7.2 Düzce) that occurred almost three months apart. These events have reduced stress on the western segment of the NAF where it continues under the Marmara Sea. The undersea fault segments have been recently explored using bathymetric and reflection surveys. These recent findings helped scientists to understand the seismotectonic environment of the Marmara basin, which has remained a perplexing tectonic domain. On the basis of collected new data, seismic hazard of the Marmara region is reassessed using a probabilistic approach. Two different earthquake source models: (1) the smoothed-gridded seismicity model and (2) fault model and alternate magnitude-frequency relations, Gutenberg-Richter and characteristic, were used with local and imported ground-motion-prediction equations. Regional exposure is computed and quantified on a set of hazard maps that provide peak horizontal ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 sec on uniform firm-rock site condition (760 m/sec average shear wave velocity in the upper 30 m). These acceleration levels were computed for ground motions having 2% and 10% probabilities of exceedance in 50 yr, corresponding to return periods of about 2475 and 475 yr, respectively. The maximum PGA computed (at rock site) is 1.5g along the fault segments of the NAF zone extending into the Marmara Sea. The new maps generally show 10% to 15% increase for PGA, 0.2 and 1.0 sec spectral acceleration values across much of Marmara compared to previous regional hazard maps. Hazard curves and smooth design spectra for three site conditions: rock, soil, and soft-soil are provided for the Istanbul metropolitan area as possible tools in future risk estimates.
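
    The return periods quoted for the hazard maps follow from the Poisson assumption; the short check below converts an exceedance probability over an exposure time into a return period, T = -t / ln(1 - P).

    ```python
    # Quick check of the quoted return periods under Poissonian hazard:
    # a motion with exceedance probability P over t years has return period T = -t / ln(1 - P).
    import math

    def return_period(p_exceed, exposure_yr=50):
        return -exposure_yr / math.log(1.0 - p_exceed)

    print(round(return_period(0.02)))   # ~2475 yr for 2% in 50 yr
    print(round(return_period(0.10)))   # ~475 yr for 10% in 50 yr
    ```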

  6. Geological hazard monitoring system in Georgia

    Science.gov (United States)

    Gaprindashvili, George

    2017-04-01

    Georgia belongs to one of the world's most complex mountainous regions in terms of the scale and frequency of geological processes and the damage caused to population, farmland, and infrastructure facilities. Geological hazards (landslides, debris flows/mudflows, rockfalls, erosion, etc.) affect many populated areas, agricultural fields, roads, oil and gas pipelines, high-voltage electric power transmission towers, hydraulic structures, and tourist complexes. Landslides occur in almost all geomorphological zones, resulting in wide differentiation in failure types and mechanisms and in the size-frequency distribution. In Georgia, geological hazards are triggered by: 1. activation of highly intense earthquakes; 2. meteorological events provoking disaster processes against the background of global climatic change; 3. large-scale human impact on the environment. The prediction and monitoring of geological hazards is a very wide theme involving researchers from different fields. Geological hazard monitoring is essential to prevent and mitigate these hazards. In past years in Georgia, several monitoring systems, such as ground-based geodetic techniques and a debris-flow Early Warning System (EWS), were installed in highly sensitive landslide and debris-flow areas. This work presents a description of the geological hazard monitoring system in Georgia.

  7. Flash-Point prediction for binary partially miscible aqueous-organic mixtures

    OpenAIRE

    Liaw, Horng-Jang; Chen, Chien Tsun; Gerbaud, Vincent

    2008-01-01

    Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes and heterogeneous distillation processes. This paper describes the development of a model for predicting the flash point of binary partially miscible mixtures of aqueous-organic systems. To confirm the predictive efficiency of the derived flash points, the model was verified by comparing the ...

  8. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, the MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show a higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
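
    A hedged sketch of the bivariate (RMM1, RMM2) anomaly correlation commonly used to score MJO forecasts is given below; the observed and forecast index arrays are synthetic, and skill is typically quoted as the lead time at which the correlation drops to 0.5.

    ```python
    # Hedged sketch: bivariate anomaly correlation between observed (a1, a2) and
    # forecast (b1, b2) MJO indices at a single lead time; arrays are synthetic.
    import numpy as np

    def bivariate_correlation(a1, a2, b1, b2):
        num = np.sum(a1 * b1 + a2 * b2)
        den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2))
        return num / den

    rng = np.random.default_rng(2)
    a1, a2 = rng.normal(size=90), rng.normal(size=90)                   # observed indices
    b1, b2 = a1 + rng.normal(0, 0.7, 90), a2 + rng.normal(0, 0.7, 90)   # forecasts
    print(bivariate_correlation(a1, a2, b1, b2))
    ```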

  9. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude and attenuation relationships with a log-normal distribution of PGA or response spectra. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects and source characteristics such as the duration of strong motion and directivity, which could significantly influence the expected motion at the site, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
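
    A minimal sketch of the two PSHA ingredients named above, a Gutenberg-Richter (negative exponential) magnitude recurrence and Poissonian occurrence in time, is given below; the a- and b-values are hypothetical.

    ```python
    # Minimal sketch of Gutenberg-Richter recurrence combined with a Poisson
    # occurrence model; a- and b-values are hypothetical.
    import math

    a_value, b_value = 4.0, 1.0          # log10 annual rate of M >= 0 and slope

    def annual_rate(m_min):
        """Annual rate of events with magnitude >= m_min (Gutenberg-Richter)."""
        return 10.0 ** (a_value - b_value * m_min)

    def poisson_exceedance(m_min, t_years):
        """Probability of at least one event with M >= m_min in t_years."""
        return 1.0 - math.exp(-annual_rate(m_min) * t_years)

    print(annual_rate(6.0), poisson_exceedance(6.0, 50))   # 0.01/yr, ~0.39 in 50 yr
    ```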

  10. Ranking of several ground-motion models for seismic hazard analysis in Iran

    International Nuclear Information System (INIS)

    Ghasemi, H; Zare, M; Fukushima, Y

    2008-01-01

    In this study, six attenuation relationships are classified with respect to the ranking scheme proposed by Scherbaum et al (2004 Bull. Seismol. Soc. Am. 94 1–22). First, the strong motions recorded during the 2002 Avaj, 2003 Bam, 2004 Kojour and 2006 Silakhor earthquakes are consistently processed. Then the normalized residual sets are determined for each selected ground-motion model, considering the strong-motion records chosen. The main advantage of these records is that the corresponding information about the causative fault plane has been well studied for the selected events. Such information is used to estimate several control parameters which are essential inputs for attenuation relations. The selected relations (Zare et al (1999 Soil Dyn. Earthq. Eng. 18 101–23); Fukushima et al (2003 J. Earthq. Eng. 7 573–98); Sinaeian (2006 PhD Thesis International Institute of Earthquake Engineering and Seismology, Tehran, Iran); Boore and Atkinson (2007 PEER, Report 2007/01); Campbell and Bozorgnia (2007 PEER, Report 2007/02); and Chiou and Youngs (2006 PEER Interim Report for USGS Review)) have been deemed suitable for predicting peak ground-motion amplitudes in the Iranian plateau. Several graphical techniques and goodness-of-fit measures are also applied for statistical distribution analysis of the normalized residual sets. This analysis reveals that ground-motion models developed using Iranian strong-motion records are the most appropriate ones in the Iranian context. The results of the present study are applicable in seismic hazard assessment projects in Iran.
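
    The normalized-residual idea behind the Scherbaum et al. (2004) ranking scheme can be sketched as follows; the exact ranking categories use specific cut-offs on these statistics, so the summaries below are a simplified stand-in rather than the published scheme.

```python
import numpy as np
from scipy import stats

def normalized_residuals(obs_ln_pga, pred_ln_pga, sigma_ln):
    """Normalized residuals z = (ln(obs) - ln(pred)) / sigma for one
    ground-motion model; z should be standard normal if the model fits."""
    return (obs_ln_pga - pred_ln_pga) / sigma_ln

def rank_statistics(z):
    """Simple goodness-of-fit summaries of a normalized residual set."""
    return {
        "mean": float(np.mean(z)),                  # bias
        "std": float(np.std(z, ddof=1)),            # dispersion vs. expected 1.0
        "median_abs": float(np.median(np.abs(z))),  # expected ~0.675 for N(0,1)
        "ks_pvalue": float(stats.kstest(z, "norm").pvalue),
    }
```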

  11. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to simulate hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small at early times and increases with transit time; that uncertainty from local-scale heterogeneity is negligible compared with uncertainty in the hydrofacies distribution; that introducing more data into a poor model structure may reduce predictive variance but does not reduce predictive bias; and that hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing it. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model

  12. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
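
    The losses-avoided and return-on-investment quantities mentioned above amount to simple arithmetic once Hazus loss estimates are available. The sketch below illustrates that roll-up with made-up numbers; the actual LUPM formulation also propagates uncertainty, which is omitted here.

```python
def mitigation_metrics(loss_no_mitigation, loss_with_mitigation, mitigation_cost,
                       annual_event_probability, horizon_years):
    """Toy roll-up: losses avoided by a mitigation action and a simple
    rate of return on the mitigation investment over a planning horizon."""
    losses_avoided = loss_no_mitigation - loss_with_mitigation
    expected_avoided = losses_avoided * annual_event_probability * horizon_years
    roi = (expected_avoided - mitigation_cost) / mitigation_cost
    return losses_avoided, expected_avoided, roi

# Illustrative numbers only: a retrofit costing $40M that reduces scenario
# losses from $300M to $120M, for a scenario with 1% annual probability.
print(mitigation_metrics(300e6, 120e6, 40e6, 0.01, 30))
```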

  13. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function, which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums ...

  14. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  15. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  16. Regional landslide hazard assessment in a deep uncertain future

    Science.gov (United States)

    Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Landslides have many negative economic and societal impacts, including the potential for significant loss of life and damage to infrastructure. These risks are likely to be exacerbated in the future by a combination of climatic and socio-economic factors. Climate change, for example, is expected to increase the occurrence of rainfall-triggered landslides, because a warmer atmosphere tends to produce more high-intensity rainfall events. Prediction of future changes in rainfall, however, is subject to high levels of uncertainty, making it challenging for decision-makers to identify the areas and populations that are most vulnerable to landslide hazards. In this study, we demonstrate how a physically-based model - the Combined Hydrology and Stability Model (CHASM) - can be used together with Global Sensitivity Analysis (GSA) to explore the underlying factors controlling the spatial distribution of landslide risks across a regional landscape, while also accounting for deep uncertainty around future rainfall conditions. We demonstrate how GSA can be used to analyse CHASM, which in turn represents the spatial variability of hillslope characteristics in the study region, while accounting for other uncertainties. Results are presented in the form of landslide hazard maps, utilising high-resolution digital elevation datasets for a case study in St Lucia in the Caribbean. Our findings about spatial landslide hazard drivers have important implications for data collection approaches and for long-term decision-making about land management practices.
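
    CHASM itself is not reproduced here. As an illustration of the Monte-Carlo sensitivity workflow only, the sketch below substitutes a simple infinite-slope factor-of-safety model and correlation-based sensitivity indices; all parameter ranges are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def factor_of_safety(cohesion, phi_deg, slope_deg, depth, wetness,
                     gamma=19.0, gamma_w=9.81):
    """Infinite-slope factor of safety -- a simple stand-in for CHASM used
    only to illustrate the sensitivity-analysis workflow."""
    phi, beta = np.radians(phi_deg), np.radians(slope_deg)
    return ((cohesion + (gamma - wetness * gamma_w) * depth *
             np.cos(beta) ** 2 * np.tan(phi)) /
            (gamma * depth * np.sin(beta) * np.cos(beta)))

# Sample uncertain inputs (ranges are illustrative, not from the study)
n = 10_000
samples = {
    "cohesion": rng.uniform(2, 20, n),     # kPa
    "phi_deg": rng.uniform(20, 35, n),     # friction angle
    "slope_deg": rng.uniform(15, 45, n),
    "depth": rng.uniform(1, 5, n),         # m
    "wetness": rng.uniform(0, 1, n),       # pore-pressure ratio proxy
}
fos = factor_of_safety(**samples)
failure = (fos < 1.0).astype(float)

# Crude global sensitivity: correlation of each input with the failure indicator
for name, x in samples.items():
    print(name, round(np.corrcoef(x, failure)[0, 1], 2))
```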

  17. Prediction of temperature and damage in an irradiated human eye-Utilization of a detailed computer model which includes a vectorial blood stream in the choroid.

    Science.gov (United States)

    Heussner, Nico; Holl, Lukas; Nowak, Timo; Beuth, Thorsten; Spitzer, Martin S; Stork, Wilhelm

    2014-08-01

    The work presented here describes the development and use of a three-dimensional thermodynamic model of the human eye for the prediction of temperatures and damage thresholds under irradiation. The model takes blood flow into account through the implementation of a vectorial blood stream in the choroid, and uses the actual physiological dimensions and tissue parameters of the eye. Furthermore, it considers evaporation, radiation and convection at the cornea as well as the eye lid. The predicted temperatures were successfully validated against existing eye models in terms of corneal and global thermal behaviour. The model's predictions were additionally checked for consistency with in-vivo temperature measurements of the cornea and the irradiated retina, and with retinal damage thresholds. These thresholds were calculated from the retinal temperatures using the Arrhenius integral. Hence the model can be used to predict the temperature increase and irradiation hazard within the human eye as long as the absorption values and the Arrhenius coefficients are known and the damage mechanism is in the thermal regime. Copyright © 2014 Elsevier Ltd. All rights reserved.
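
    The Arrhenius damage integral mentioned above is straightforward to evaluate numerically once a temperature history is available. A minimal sketch follows; the default frequency factor and activation energy are commonly quoted retinal values used only for illustration, and the temperature pulse is invented.

```python
import numpy as np

def arrhenius_damage(time_s, temp_kelvin, A=3.1e99, Ea=6.28e5, R=8.314):
    """Arrhenius damage integral Omega = integral of A*exp(-Ea/(R*T(t))) dt.
    Omega >= 1 is conventionally taken as the threshold for thermal damage.
    A (1/s) and Ea (J/mol) are tissue-specific; defaults are illustrative."""
    rates = A * np.exp(-Ea / (R * temp_kelvin))
    return np.trapz(rates, time_s)

# Example: a 1 s exposure with a temperature pulse peaking near 58 degrees C
t = np.linspace(0.0, 1.0, 1001)
T = 310.15 + 21.0 * np.exp(-((t - 0.5) / 0.2) ** 2)   # Kelvin
print(arrhenius_damage(t, T))
```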

  18. Toward a coupled Hazard-Vulnerability Tool for Flash Flood Impacts Prediction

    Science.gov (United States)

    Terti, Galateia; Ruin, Isabelle; Anquetin, Sandrine; Gourley, Jonathan J.

    2015-04-01

    Flash floods (FF) are high-impact, catastrophic events that result from the intersection of hydrometeorological extremes and society at small space-time scales, generally on the order of minutes to hours. Because FF events are generally localized in space and time, they are very difficult to forecast with precision and can subsequently leave people uninformed and subject to surprise in the midst of their daily activities (e.g., commuting to work). In Europe, FFs are the main source of natural-hazard fatalities, although they affect smaller areas than riverine flooding. In the US as well, flash flooding is the leading cause of weather-related deaths in most years, with some 200 annual fatalities. There were 954 fatalities and approximately 31 billion U.S. dollars of property damage due to floods and flash floods from 1995 to 2012 in the US. For forecasters and emergency managers, the prediction of, and subsequent response to, the impacts of such sudden-onset, localized events remains a challenge. This research is motivated by the hypothesis that the intersection of the spatio-temporal context of the hazard with the distribution of people and their characteristics across space and time reveals different paths of vulnerability. We argue that vulnerability and the dominant impact type vary dynamically throughout the day and week according to the location under concern. Thus, dynamic vulnerability indices are appropriate to develop; they can capture, for example, vehicle-related impacts on the active population concentrated on the road network during morning or evening rush hours. This study describes the methodological developments of our approach and applies our hypothesis to the case of the June 14th, 2010 flash flood event in the Oklahoma City area (Oklahoma, US). Social (i.e. population socio-economic profile), exposure (i.e. population distribution, land use), and physical (i.e. built and natural environment) data are used to compose different vulnerability products based on the forecast location

  19. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
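
    The averaging models referred to above are described only at a high level in the abstract; the sketch below shows one plausible 15-minute averaging baseline together with a MAPE score, with all data synthetic.

```python
import numpy as np

def averaging_baseline(history_kwh):
    """Simple averaging baseline for 15-min consumption: predict tomorrow's
    96-slot profile as the mean of the corresponding slots over past days.
    `history_kwh` has shape (n_days, 96)."""
    return history_kwh.mean(axis=0)

def mape(actual, predicted):
    """Mean absolute percentage error, a common accuracy metric for such models."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Toy example: 7 days of history, then score against the 8th day
rng = np.random.default_rng(1)
days = 5.0 + rng.normal(0, 0.5, size=(8, 96))
print(mape(days[7], averaging_baseline(days[:7])))
```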

  20. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL, sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
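
    A compact sketch of the ensemble idea, combining dense and sparse whole-genome predictors with a (here synthetic) risk score in a second-stage regression, is shown below. The data are simulated, and fitting the meta-model on in-sample rather than out-of-fold predictions is a simplification of proper stacking.

```python
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, LinearRegression

rng = np.random.default_rng(0)

# Toy genotype matrix (n individuals x p SNPs) and phenotype; purely synthetic
n, p = 500, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:20] = rng.normal(0, 0.2, 20)
y = X @ beta + rng.normal(0, 1.0, n)

X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

# Whole-genome predictors: dense (ridge) and sparse (lasso)
ridge = RidgeCV(alphas=np.logspace(0, 4, 20)).fit(X_train, y_train)
lasso = LassoCV(cv=5, max_iter=5000).fit(X_train, y_train)

# Stand-in for a published polygenic risk score: a fixed weight vector
prs = X @ rng.normal(0, 0.05, p)

# Meta-model: combine the individual predictors on the training set
meta_train = np.column_stack([ridge.predict(X_train), lasso.predict(X_train), prs[:400]])
meta_test = np.column_stack([ridge.predict(X_test), lasso.predict(X_test), prs[400:]])
meta = LinearRegression().fit(meta_train, y_train)

for name, pred in [("ridge", ridge.predict(X_test)),
                   ("lasso", lasso.predict(X_test)),
                   ("meta", meta.predict(meta_test))]:
    print(name, round(np.corrcoef(pred, y_test)[0, 1] ** 2, 3))
```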

  1. Climate Prediction Center(CPC)Global Tropics Hazards and Benefits Assessment

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Tropics Hazards and Benefits Assessment (GTH) is an outlook product for the areas in the Tropics. Forecasts for the Week-1 and Week-2 period are given for...

  2. BEHAVIORAL HAZARD IN HEALTH INSURANCE*

    Science.gov (United States)

    Baicker, Katherine; Mullainathan, Sendhil; Schwartzstein, Joshua

    2015-01-01

    A fundamental implication of standard moral hazard models is overuse of low-value medical care because copays are lower than costs. In these models, the demand curve alone can be used to make welfare statements, a fact relied on by much empirical work. There is ample evidence, though, that people misuse care for a different reason: mistakes, or “behavioral hazard.” Much high-value care is underused even when patient costs are low, and some useless care is bought even when patients face the full cost. In the presence of behavioral hazard, welfare calculations using only the demand curve can be off by orders of magnitude or even be the wrong sign. We derive optimal copay formulas that incorporate both moral and behavioral hazard, providing a theoretical foundation for value-based insurance design and a way to interpret behavioral “nudges.” Once behavioral hazard is taken into account, health insurance can do more than just provide financial protection—it can also improve health care efficiency. PMID:23930294

  3. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  4. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
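
    The empirical transition rates that participants' ratings were compared against can be tabulated directly from an experience-sampling sequence. The following sketch shows that tabulation; the state coding and the toy sequence are invented for illustration.

```python
import numpy as np

def transition_matrix(emotion_sequence, n_states):
    """Estimate a row-stochastic emotion-transition matrix from an
    experience-sampling sequence of integer-coded emotional states."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(emotion_sequence[:-1], emotion_sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy sequence over 3 states (e.g. 0 = calm, 1 = happy, 2 = anxious)
seq = [0, 0, 1, 1, 2, 1, 0, 2, 2, 1, 0]
P = transition_matrix(seq, 3)
print(P.round(2))
# Participants' rated transition likelihoods could then be correlated with
# these empirical rates, e.g. np.corrcoef(P.ravel(), ratings.ravel())
```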

  5. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using Poisson mixture regression models.

  6. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using Poisson mixture regression models. PMID:27999611
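
    A full Poisson mixture regression with concomitant variables is more involved than can be shown here; the sketch below strips the model down to a two-component Poisson mixture without covariates, fitted by EM, purely to illustrate the clustering-into-risk-groups idea.

```python
import numpy as np
from scipy.stats import poisson

def fit_poisson_mixture(y, n_iter=200):
    """EM for a two-component Poisson mixture (no covariates) -- a stripped-down
    stand-in for the mixture regression models discussed above."""
    y = np.asarray(y, float)
    lam = np.array([y.mean() * 0.5, y.mean() * 1.5])   # initial component rates
    pi = np.array([0.5, 0.5])                          # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        like = pi * poisson.pmf(y[:, None], lam)
        resp = like / like.sum(axis=1, keepdims=True)
        # M-step: update proportions and component rates
        pi = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    return pi, lam, resp

# Toy counts: a low-risk and a high-risk group of event counts
rng = np.random.default_rng(2)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 100)])
pi, lam, resp = fit_poisson_mixture(y)
print(pi.round(2), lam.round(2))   # recovered proportions and rates
```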

  7. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. The paper suggests that a further direction for developing performance prediction models is to combine the strengths and weaknesses of the different models to obtain better accuracy.
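
    The Markov chain approach favoured above for data-limited settings reduces to propagating a condition-state distribution through a transition matrix. A minimal sketch follows; the transition probabilities are invented, not calibrated values from the paper.

```python
import numpy as np

# Illustrative transition matrix between faulting condition states
# (1 = good ... 4 = severe); values are invented, not from the paper.
P = np.array([
    [0.85, 0.12, 0.03, 0.00],
    [0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],   # severe faulting is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # a section currently in "good" condition
for year in range(1, 11):
    state = state @ P                     # propagate the condition distribution
    if year in (5, 10):
        print(year, state.round(3))
```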

  8. A brief peripheral motion contrast threshold test predicts older drivers' hazardous behaviors in simulated driving.

    Science.gov (United States)

    Henderson, Steven; Woods-Fry, Heather; Collin, Charles A; Gagnon, Sylvain; Voloaca, Misha; Grant, John; Rosenthal, Ted; Allen, Wade

    2015-05-01

    Our research group has previously demonstrated that the peripheral motion contrast threshold (PMCT) test predicts older drivers' self-reported accident risk, as well as simulated driving performance. However, the PMCT is too lengthy to be part of a battery of tests to assess fitness to drive. Therefore, we have developed a new version of this test, which takes under two minutes to administer. We assessed the motion contrast thresholds of 24 younger drivers (19-32) and 25 older drivers (65-83) with both the PMCT-10min and the PMCT-2min test and investigated whether thresholds were associated with measures of simulated driving performance. Younger participants had significantly lower motion contrast thresholds than older participants, and there were no significant correlations between younger participants' thresholds and any measures of driving performance. The PMCT-10min and the PMCT-2min thresholds of older drivers predicted simulated crash risk, as well as the minimum distance of approach to all hazards. This suggests that our tests of motion processing can help predict the risk of collision or near collision in older drivers. Thresholds were also correlated with the total lane deviation time, suggesting a deficiency in the processing of peripheral flow and delayed detection of adjacent cars. The PMCT-2min is an improved version of a previously validated test, and it has the potential to help assess older drivers' fitness to drive. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend ...

  10. A structure for models of hazardous materials with complex behavior

    International Nuclear Information System (INIS)

    Rodean, H.C.

    1991-01-01

    Most atmospheric dispersion models used to assess the environmental consequences of accidental releases of hazardous chemicals do not have the capability to simulate the pertinent chemical and physical processes associated with the release of the material and its mixing with the atmosphere. The purpose of this paper is to present a materials sub-model with the flexibility to simulate the chemical and physical behaviour of a variety of materials released into the atmosphere. The model, which is based on thermodynamic equilibrium, incorporates the ideal gas law, temperature-dependent vapor pressure equations, temperature-dependent dissociation reactions, and reactions with atmospheric water vapor. The model equations, written in terms of pressure ratios and dimensionless parameters, are used to construct equilibrium diagrams with temperature and the mass fraction of the material in the mixture as coordinates. The model's versatility is demonstrated by its application to the release of UF6 and N2O4, two materials with very different physical and chemical properties. (author)

  11. A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.

    Science.gov (United States)

    Hardy, Lorna; Hogarth, Lee

    2017-12-01

    This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Identification Test) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two-alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as at baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline, and this increase in choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Predictive capabilities of a two-dimensional model in the ground water transport of radionuclides

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Beskid, N.J.; Marmer, G.J.

    1978-01-01

    The discharge of low-level radioactive waste into tailings ponds is a potential source of ground water contamination. The estimation of the radiological hazards related to the ground water transport of radionuclides from tailings retention systems depends on reasonably accurate estimates of the movement of both water and solute. A two-dimensional mathematical model having predictive capability for ground water flow and solute transport has been developed. The flow equation has been solved under steady-state conditions and the mass transport equation under transient conditions. The simultaneous solution of both equations is achieved through the finite element technique using isoparametric elements, based on the Galerkin formulation. However, in contrast to the flow equation solution, the weighting functions used in the solution of the mass transport equation have a non-symmetric form. The predictive capability of the model is demonstrated using an idealized case based on analyses of field data obtained from the sites of operating uranium mills. The pH of the solution, which regulates the variation of the distribution coefficient (Kd) in a particular site, appears to be the most important factor in the assessment of the rate of migration of the elements considered herein.
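
    The paper's model is a two-dimensional finite-element solution, which is not reproduced here. As a self-contained illustration of how a distribution coefficient (through a retardation factor) slows solute migration, the classical one-dimensional Ogata-Banks solution is sketched instead, with invented parameter values.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, R=1.0, c0=1.0):
    """1-D advection-dispersion solution (Ogata-Banks) for a continuous source,
    with retardation factor R = 1 + rho_b*Kd/theta mimicking the effect of the
    distribution coefficient. Units must be consistent (here metres and days)."""
    vr, Dr = v / R, D / R
    term1 = erfc((x - vr * t) / (2.0 * np.sqrt(Dr * t)))
    term2 = np.exp(vr * x / Dr) * erfc((x + vr * t) / (2.0 * np.sqrt(Dr * t)))
    return 0.5 * c0 * (term1 + term2)

# Illustrative: relative concentration 50 m downgradient after 10 years,
# for two retardation factors (all parameter values are made up)
for R in (1.0, 5.0):
    print(R, round(ogata_banks(x=50.0, t=10 * 365.25, v=0.05, D=0.5, R=R), 4))
```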

  13. A global probabilistic tsunami hazard assessment from earthquake sources

    Science.gov (United States)

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  14. Quantitative structure-activity relationships for predicting potential ecological hazard of organic chemicals for use in regulatory risk assessments.

    Science.gov (United States)

    Comber, Mike H I; Walker, John D; Watts, Chris; Hermens, Joop

    2003-08-01

    The use of quantitative structure-activity relationships (QSARs) for deriving the predicted no-effect concentration of discrete organic chemicals for the purposes of conducting a regulatory risk assessment in Europe and the United States is described. In the United States, under the Toxic Substances Control Act (TSCA), the TSCA Interagency Testing Committee and the U.S. Environmental Protection Agency (U.S. EPA) use SARs to estimate the hazards of existing and new chemicals. Within the Existing Substances Regulation in Europe, QSARs may be used for data evaluation, test strategy indications, and the identification and filling of data gaps. To illustrate where and when QSARs may be useful and when their use is more problematic, an example, methyl tertiary-butyl ether (MTBE), is given and the predicted and experimental data are compared. Improvements needed for new QSARs and tools for developing and using QSARs are discussed.

  15. Long Aftershock Sequences within Continents and Implications for Earthquake Hazard Assessment

    Science.gov (United States)

    Stein, S. A.; Liu, M.

    2014-12-01

    Recent seismicity in the Tangshan region in North China has prompted concern about a repetition of the 1976 M7.8 earthquake that destroyed the city, killing more than 242,000 people. However, the decay of seismicity there implies that the recent earthquakes are probably aftershocks of the 1976 event. This 37-year sequence is an example of the phenomenon that aftershock sequences within continents are often significantly longer than the typical 10 years at plate boundaries. The long duration of continental aftershock sequences is consistent with a simple friction-based model predicting that the length of aftershock sequences varies inversely with the rate at which faults are loaded. Hence the slowly deforming continents tend to have aftershock sequences significantly longer than those at rapidly loaded plate boundaries. This effect has two consequences for hazard assessment. First, in heavily populated continental interiors, assessments of earthquake hazards rely significantly on the assumption that the locations of small earthquakes shown by the short historical record reflect continuing deformation that will cause future large earthquakes. This assumption would lead to overestimation of the hazard in presently active areas and underestimation elsewhere if some of these small events are aftershocks. Second, successful attempts to remove aftershocks from catalogs used for hazard assessment would underestimate the hazard, because much of the hazard is due to the aftershocks, and the declustering algorithms implicitly assume short aftershock sequences and thus do not remove long-duration ones.
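
    The inverse relationship between loading rate and aftershock-sequence duration can be illustrated with the modified Omori law and an operational definition of duration as the time for the aftershock rate to fall back to the background rate. All parameter values below are illustrative.

```python
import numpy as np

def omori_rate(t_years, K=1000.0, c=0.05, p=1.05):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p."""
    return K / (c + t_years) ** p

def sequence_duration(background_rate, K=1000.0, c=0.05, p=1.05):
    """Time until the aftershock rate falls to the background rate -- a common
    operational definition of aftershock-sequence duration. Slowly loaded
    continental interiors have low background rates, hence long sequences."""
    return (K / background_rate) ** (1.0 / p) - c

# Plate boundary (high background seismicity) vs. continental interior (low)
for name, mu in [("plate boundary", 100.0), ("continental interior", 1.0)]:
    print(name, round(sequence_duration(mu), 1), "years")
```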

  16. Household hazardous waste disposal to landfill: Using LandSim to model leachate migration

    International Nuclear Information System (INIS)

    Slack, Rebecca J.; Gronow, Jan R.; Hall, David H.; Voulvoulis, Nikolaos

    2007-01-01

    Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW. - Aquatic pollutants linked to the disposal of household hazardous waste in municipal landfills have the potential to exist in soil and groundwater for many years

  17. Thermal regime of the lithosphere and prediction of seismic hazard in the Caspian region

    International Nuclear Information System (INIS)

    Levin, L.E.; Solodilov, L.N.; Kondorskaya, N.V.; Gasanov, A.G; Panahi, B.M.

    2002-01-01

    Full text: Prediction of seismicity is one element of ecological hazard warning. In this collective research, it is developed in three directions: quantitative estimation of the seismic activity level of regional faults; determination of the spatial position of earthquake risk zones; and identification of sites with high seismic potential for the next 3-5 years. The prediction takes into account the peculiar feature, observed throughout the region, that about 90 percent of earthquake hypocenters, and of the released seismic-wave energy, are associated with the elastic-brittle layer of the lithosphere. Earthquake epicenters are concentrated predominantly in zones of complex structure of the elastic-brittle layer, where the gradient of its thickness is 20-30 km. Directions of hypocenter migration in the plastic-viscous layer reveal the spatial position of seismically dangerous zones. All this necessitates the generalization of data on earthquake epicenter locations, the determination of their magnitudes, and the spatial position of regional faults and heat flow, with the thermal regime calculated to clarify variations in lithosphere and elastic-brittle layer thickness separately. The general analysis includes a calculation of the released seismic-wave energy, determination of the peculiar features of its distribution across the region, and studies of hypocenter migration in the plastic-viscous layer of the lithosphere over time.

  18. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
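
    One simplified reading of the decomposition described above is sketched below: a squared bias estimated from hindcasts plus a model-variance term estimated from an ensemble of model variants. The random-effects ANOVA partition of the variance term is not shown, and all numbers are invented.

```python
import numpy as np

def msep_uncertain(hindcast_pred, observations, ensemble_pred):
    """Sketch of MSEP_uncertain(X) ~ squared bias (hindcasts vs. observations)
    + model variance (spread of an ensemble of model variants / parameter sets
    for the prediction situation of interest)."""
    squared_bias = np.mean(hindcast_pred - observations) ** 2
    model_variance = np.var(ensemble_pred, ddof=1)
    return squared_bias + model_variance, squared_bias, model_variance

# Toy numbers: hindcast yields vs. observed yields, plus an ensemble of
# predictions for a new situation (all values invented)
hindcast = np.array([6.1, 5.4, 7.0, 6.5])
observed = np.array([5.8, 5.6, 6.6, 6.9])
ensemble = np.array([6.2, 5.9, 6.8, 6.4, 7.1])
print(msep_uncertain(hindcast, observed, ensemble))
```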

  19. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  20. Water, ice and mud: Lahars and lahar hazards at ice- and snow-clad volcanoes

    Science.gov (United States)

    Waythomas, Christopher F.

    2014-01-01

    Large-volume lahars are significant hazards at ice- and snow-covered volcanoes. Hot eruptive products produced during explosive eruptions can generate a substantial volume of meltwater that quickly evolves into highly mobile flows of ice, sediment and water. At present it is difficult to predict the size of lahars that can form at ice- and snow-covered volcanoes due to their complex flow character and behaviour. However, advances in experiments and numerical approaches are producing new conceptual models and new methods for hazard assessment. Eruption-triggered lahars that are ice-dominated leave behind thin, almost unrecognizable sedimentary deposits, making them likely to be under-represented in the geological record.

  1. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  2. A model used to derive hazardous waste concentration limits aiming at the reduction of toxic and hazardous wastes. Applications to illustrate the discharge of secondary categories types B and C

    International Nuclear Information System (INIS)

    Paris, P.

    1989-11-01

    This report describes a model which may be used to derive hazardous waste concentration limits in order to prevent groundwater pollution from landfill disposal. First, leachate concentration limits are determined taking into account the attenuation capacity of the landfill site as a whole; waste concentrations are then derived by an elution model which assumes a constant ratio between liquid and solid concentrations. In the example, two types of landfill have been considered and in each case concentration limits have been calculated for some hazardous substances and compared with the corresponding regulatory limits. (author)
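
    The constant liquid-solid ratio step described above amounts to multiplying the allowable leachate concentration by the partition coefficient (and, if applicable, a site dilution-attenuation factor). The sketch below illustrates that back-calculation with invented numbers; it is not the report's full model.

```python
def waste_concentration_limit(leachate_limit_mg_l, kd_l_kg, dilution_attenuation=1.0):
    """Back-calculate an allowable waste (solid) concentration from a leachate
    concentration limit, assuming a constant liquid/solid partition ratio Kd
    and an optional dilution-attenuation factor for the site."""
    return leachate_limit_mg_l * dilution_attenuation * kd_l_kg

# Illustrative: a 0.05 mg/L leachate limit, Kd = 20 L/kg, site DAF = 10
print(waste_concentration_limit(0.05, 20.0, 10.0), "mg/kg")
```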

  3. Combining computational models for landslide hazard assessment of Guantánamo province, Cuba

    NARCIS (Netherlands)

    Castellanos Abella, E.A.

    2008-01-01

    As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial

  4. A prediction model for 5-year cardiac mortality in patients with chronic heart failure using {sup 123}I-metaiodobenzylguanidine imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Kenichi; Matsuo, Shinro [Kanazawa University Hospital, Department of Nuclear Medicine, Kanazawa (Japan); Nakata, Tomoaki [Sapporo Medical University School of Medicine, Second Department of Internal Medicine (Cardiology), Sapporo (Japan); Hakodate-Goryoukaku Hospital, Department of Cardiology, Hakodate (Japan); Yamada, Takahisa [Osaka Prefectural General Medical Center, Department of Cardiology, Osaka (Japan); Yamashina, Shohei [Toho University Omori Medical Center, Department of Cardiovascular Medicine, Tokyo (Japan); Momose, Mitsuru [Tokyo Women's Medical University, Department of Nuclear Medicine, Tokyo (Japan); Kasama, Shu [Cardiovascular Hospital of Central Japan, Department of Cardiology, Shibukawa (Japan); Matsui, Toshiki [Social Insurance Shiga General Hospital, Department of Cardiology, Otsu (Japan); Travin, Mark I. [Albert Einstein Medical College, Department of Cardiology and Nuclear Medicine, Montefiore Medical Center, Bronx, NY (United States); Jacobson, Arnold F. [GE Healthcare, Medical Diagnostics, Princeton, NJ (United States)

    2014-09-15

    Prediction of mortality risk is important in the management of chronic heart failure (CHF). The aim of this study was to create a prediction model for 5-year cardiac death including assessment of cardiac sympathetic innervation using data from a multicenter cohort study in Japan. The original pooled database consisted of cohort studies from six sites in Japan. A total of 933 CHF patients who underwent {sup 123}I-metaiodobenzylguanidine (MIBG) imaging and whose 5-year outcomes were known were selected from this database. The late MIBG heart-to-mediastinum ratio (HMR) was used for quantification of cardiac uptake. Cox proportional hazard and logistic regression analyses were used to select appropriate variables for predicting 5-year cardiac mortality. The formula for predicting 5-year mortality was created using a logistic regression model. During the 5-year follow-up, 205 patients (22 %) died of a cardiac event including heart failure death, sudden cardiac death and fatal acute myocardial infarction (64 %, 30 % and 6 %, respectively). Multivariate logistic analysis selected four parameters, including New York Heart Association (NYHA) functional class, age, gender and left ventricular ejection fraction, without HMR (model 1) and five parameters with the addition of HMR (model 2). The net reclassification improvement analysis for all subjects was 13.8 % (p < 0.0001) by including HMR and its inclusion was most effective in the downward reclassification of low-risk patients. Nomograms for predicting 5-year cardiac mortality were created from the five-parameter regression model. Cardiac MIBG imaging had a significant additive value for predicting cardiac mortality. The prediction formula and nomograms can be used for risk stratifying in patients with CHF. (orig.)

  5. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  6. Three-dimensional displays for natural hazards analysis, using classified Landsat Thematic Mapper digital data and large-scale digital elevation models

    Science.gov (United States)

    Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.

    1991-01-01

    Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.

  7. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    Science.gov (United States)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Floods Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software packages are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure that joins the advantages offered by the Software-as-a-Service (SaaS) delivery model and by WebGIS technology, hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an

  8. Enhanced outage prediction modeling for strong extratropical storms and hurricanes in the Northeastern United States

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Wanik, D. W.; Bhuiyan, M. A. E.; Zhang, X.; Yang, J.; Astitha, M.; Frediani, M. E.; Schwartz, C. S.; Pardakhti, M.

    2016-12-01

    The overwhelming majority of human activities need reliable electric power. Severe weather events can cause power outages, resulting in substantial economic losses and a temporary worsening of living conditions. Accurate prediction of these events and the communication of forecasted impacts to the affected utilities are necessary for efficient emergency preparedness and mitigation. The University of Connecticut Outage Prediction Model (OPM) uses regression tree models, high-resolution weather reanalysis and real-time weather forecasts (WRF and the NCAR ensemble), airport station data, vegetation and electric grid characteristics, and historical outage data to forecast the number and spatial distribution of outages in the power distribution grid located within dense vegetation. Recent OPM improvements consist of improved storm classification and the addition of new predictive weather-related variables, and are demonstrated using leave-one-storm-out cross-validation based on 130 severe extratropical storms and two hurricanes (Sandy and Irene) in the Northeast US. We show that it is possible to predict the number of trouble spots causing outages in the electric grid with a median absolute percentage error as low as 27% for some storm types, and at most around 40%, over a scale that spans four orders of magnitude, from a few outages to tens of thousands. This outage information can be communicated to the electric utility to manage the allocation of crews and equipment and minimize the recovery time for an upcoming storm hazard.

  9. Memory Binding Test Predicts Incident Amnestic Mild Cognitive Impairment.

    Science.gov (United States)

    Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman

    2016-07-14

    The Memory Binding Test (MBT), previously known as the Memory Capacity Test, has demonstrated discriminative validity for distinguishing persons with amnestic mild cognitive impairment (aMCI) and dementia from cognitively normal elderly. We aimed to assess the predictive validity of the MBT for incident aMCI. In a longitudinal, community-based study of adults aged 70+, we administered the MBT to 246 cognitively normal elderly adults at baseline and followed them annually. Based on previous work, a subtle reduction in memory binding at baseline was defined by a Total Items in the Paired (TIP) condition score of ≤22 on the MBT. Cox proportional hazards models were used to assess the predictive validity of the MBT for incident aMCI, accounting for the effects of covariates. The hazard ratio of incident aMCI was also assessed for different prediction time windows ranging from 4 to 7 years of follow-up, separately. Among 246 controls who were cognitively normal at baseline, 48 developed incident aMCI during follow-up. A baseline MBT reduction was associated with an increased risk for developing incident aMCI (hazard ratio (HR) = 2.44, 95% confidence interval: 1.30-4.56, p = 0.005). When varying the prediction window from 4 to 7 years, the MBT reduction remained significant for predicting incident aMCI (HR range: 2.33-3.12, p: 0.0007-0.04). Persons with poor performance on the MBT are at significantly greater risk for developing incident aMCI. High hazard ratios up to seven years of follow-up suggest that the MBT is sensitive to early disease.
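
    A Cox proportional hazards fit of the kind described above can be sketched with the lifelines library on synthetic data; the cohort below is simulated with arbitrary effect sizes, so the resulting hazard ratio only illustrates the workflow, not the study's estimate.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the cohort: follow-up time (years), incident aMCI
# indicator, and a baseline MBT reduction flag (TIP score <= 22).
rng = np.random.default_rng(3)
n = 246
mbt_reduction = rng.binomial(1, 0.3, n)
age = rng.normal(78, 5, n)
hazard = 0.03 * np.exp(0.9 * mbt_reduction + 0.04 * (age - 78))
time_to_event = rng.exponential(1.0 / hazard)
follow_up = np.minimum(time_to_event, 7.0)          # administrative censoring
event = (time_to_event <= 7.0).astype(int)

df = pd.DataFrame({"years": follow_up, "aMCI": event,
                   "mbt_reduction": mbt_reduction, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="aMCI")
print(cph.hazard_ratios_)   # hazard ratio for the baseline MBT reduction
```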

  10. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R² = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.
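
    The descriptor-based modelling step can be sketched as a small regression exercise: fit on 13 catalysts, check on the 5 held out. The descriptor values, the ridge-regression choice and the yields below are placeholders, not the study's descriptors or model.

```python
# Hedged sketch of fitting catalyst performance to molecular descriptors and
# validating on held-out catalysts (synthetic data, illustrative model choice).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n_catalysts, n_descriptors = 18, 5
X = rng.normal(size=(n_catalysts, n_descriptors))       # 2D/3D descriptors
yield_pct = 80 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=n_catalysts)

train, test = slice(0, 13), slice(13, 18)                # 13 for fitting, 5 held out
model = Ridge(alpha=1.0).fit(X[train], yield_pct[train])
print("held-out R^2:", round(r2_score(yield_pct[test], model.predict(X[test])), 3))
```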

  11. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  12. Reducing aquatic hazards of industrial chemicals: probabilistic assessment of sustainable molecular design guidelines.

    Science.gov (United States)

    Connors, Kristin A; Voutchkova-Kostal, Adelina M; Kostal, Jakub; Anastas, Paul; Zimmerman, Julie B; Brooks, Bryan W

    2014-08-01

    Basic toxicological information is lacking for the majority of industrial chemicals. In addition to increasing empirical toxicity data through additional testing, prospective computational approaches to drug development aim to serve as a rational basis for the design of chemicals with reduced toxicity. Recent work has resulted in the derivation of a "rule of 2," wherein chemicals with an octanol-water partition coefficient (log P) less than 2 and a difference between the lowest unoccupied molecular orbital and the highest occupied molecular orbital (ΔE) greater than 9 eV are predicted to be 4 to 5 times less likely to elicit acute or chronic toxicity to model aquatic organisms. The present study examines potential reduction of aquatic toxicity hazards from industrial chemicals if these 2 molecular design guidelines were employed. Probabilistic hazard assessment approaches were used to model the likelihood of encountering industrial chemicals exceeding toxicological categories of concern both with and without the rule of 2. Modeling predicted that utilization of these molecular design guidelines for log P and ΔE would appreciably decrease the number of chemicals that would be designated to be of "high" and "very high" concern for acute and chronic toxicity to standard model aquatic organisms and end points as defined by the US Environmental Protection Agency. For example, 14.5% of chemicals were categorized as having high and very high acute toxicity to the fathead minnow model, whereas only 3.3% of chemicals conforming to the design guidelines were predicted to be in these categories. Considerations of specific chemical classes (e.g., aldehydes), chemical attributes (e.g., ionization), and adverse outcome pathways in representative species (e.g., receptor-mediated responses) could be used to derive future property guidelines for broader classes of contaminants. © 2014 SETAC.
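
    The "rule of 2" screen reduces to a simple property filter, sketched below with made-up property values; the function and chemical names are illustrative only.

```python
# Toy illustration of applying the "rule of 2" design guidelines described above:
# keep candidate structures with log P < 2 and a HOMO-LUMO gap (delta E) > 9 eV.
def passes_rule_of_two(log_p: float, delta_e_ev: float) -> bool:
    """Return True if the chemical meets both molecular design guidelines."""
    return log_p < 2.0 and delta_e_ev > 9.0

candidates = {
    "chemical_A": (1.3, 10.2),   # (log P, delta E in eV) -- made-up values
    "chemical_B": (3.8, 9.5),
    "chemical_C": (0.7, 8.1),
}
for name, (log_p, delta_e) in candidates.items():
    print(name, "passes" if passes_rule_of_two(log_p, delta_e) else "fails")
```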

  13. Hazard perception, risk perception, and the need for decontamination by residents exposed to soil pollution: the role of sustainability and the limits of expert knowledge.

    Science.gov (United States)

    Vandermoere, Frédéric

    2008-04-01

    This case study examines the hazard and risk perception and the need for decontamination according to people exposed to soil pollution. Using an ecological-symbolic approach (ESA), a multidisciplinary model is developed that draws upon psychological and sociological perspectives on risk perception and includes ecological variables by using data from experts' risk assessments. The results show that hazard perception is best predicted by objective knowledge, subjective knowledge, estimated knowledge of experts, and the assessed risks. However, experts' risk assessments induce an increase in hazard perception only when residents know the urgency of decontamination. Risk perception is best predicted by trust in the risk management. Additionally, need for decontamination relates to hazard perception, risk perception, estimated knowledge of experts, and thoughts about sustainability. In contrast to the knowledge deficit model, objective and subjective knowledge did not significantly relate to risk perception and need for decontamination. The results suggest that residents can make a distinction between hazards in terms of the seriousness of contamination on the one hand, and human health risks on the other hand. Moreover, next to the importance of social determinants of environmental risk perception, this study shows that the output of experts' risk assessments-or the objective risks-can create a hazard awareness rather than an alarming risk consciousness, despite residents' distrust of scientific knowledge.

  14. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
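
    PHAZE itself is not reproduced here, but the flavour of its Weibull-type (power-law) hazard estimation for time-censored data can be sketched with the standard closed-form maximum-likelihood estimators; the failure times and observation window below are invented.

```python
# Sketch (not PHAZE): closed-form MLEs for a power-law (Weibull-type) hazard
# lambda(t) = (beta/theta) * (t/theta)**(beta - 1), fitted to time-censored
# data, i.e. failure times t_i observed over a fixed period (0, T].
import math

def power_law_mle(failure_times, T):
    n = len(failure_times)
    beta_hat = n / sum(math.log(T / t) for t in failure_times)
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

failures = [120.0, 310.0, 505.0, 620.0, 700.0]   # illustrative failure times (days)
T = 730.0                                        # fixed observation window (days)
beta, theta = power_law_mle(failures, T)
print(f"beta = {beta:.2f} (>1 suggests an increasing hazard), theta = {theta:.0f} days")
print("hazard at end of window:", beta / theta * (T / theta) ** (beta - 1))
```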

  15. VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation

    Science.gov (United States)

    Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.

    2009-12-01

    Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the

  16. Development of a multivariate model to predict the likelihood of carcinoma in patients with indeterminate peripheral lung nodules after a nondiagnostic bronchoscopic evaluation.

    Science.gov (United States)

    Voss, Jesse S; Iqbal, Seher; Jenkins, Sarah M; Henry, Michael R; Clayton, Amy C; Jett, James R; Kipp, Benjamin R; Halling, Kevin C; Maldonado, Fabien

    2014-01-01

    Studies have shown that fluorescence in situ hybridization (FISH) testing increases lung cancer detection on cytology specimens in peripheral nodules. The goal of this study was to determine whether a predictive model using clinical features and routine cytology with FISH results could predict lung malignancy after a nondiagnostic bronchoscopic evaluation. Patients with an indeterminate peripheral lung nodule that had a nondiagnostic bronchoscopic evaluation were included in this study (N = 220). FISH was performed on residual bronchial brushing cytology specimens diagnosed as negative (n = 195), atypical (n = 16), or suspicious (n = 9). FISH results included hypertetrasomy (n = 30) and negative (n = 190). Primary study end points included lung cancer status along with time to diagnosis of lung cancer or date of last clinical follow-up. Hazard ratios (HRs) were calculated using Cox proportional hazards regression model analyses, and P values < .05 were considered statistically significant. The mean age of the 220 patients was 66.7 years (range, 35-91), and most (58%) were men. Most patients (79%) were current or former smokers with a mean pack year history of 43.2 years (median, 40; range, 1-200). After multivariate analysis, hypertetrasomy FISH (HR = 2.96, P < .001), pack years (HR = 1.03 per pack year up to 50, P = .001), age (HR = 1.04 per year, P = .02), atypical or suspicious cytology (HR = 2.02, P = .04), and nodule spiculation (HR = 2.36, P = .003) were independent predictors of malignancy over time and were used to create a prediction model (C-statistic = 0.78). These results suggest that this multivariate model including test results and clinical features may be useful following a nondiagnostic bronchoscopic examination. © 2013.

  17. Development and validation of a predictive risk model for all-cause mortality in type 2 diabetes.

    Science.gov (United States)

    Robinson, Tom E; Elley, C Raina; Kenealy, Tim; Drury, Paul L

    2015-06-01

    Type 2 diabetes is common and is associated with an approximate 80% increase in the rate of mortality. Management decisions may be assisted by an estimate of the patient's absolute risk of adverse outcomes, including death. This study aimed to derive a predictive risk model for all-cause mortality in type 2 diabetes. We used primary care data from a large national multi-ethnic cohort of patients with type 2 diabetes in New Zealand and linked mortality records to develop a predictive risk model for 5-year risk of mortality. We then validated this model using information from a separate cohort of patients with type 2 diabetes. 26,864 people were included in the development cohort with a median follow-up time of 9.1 years. We developed three models, initially using demographic information and then progressively more clinical detail. The final model, which also included markers of renal disease, proved to give the best prediction of all-cause mortality with a C-statistic of 0.80 in the development cohort and 0.79 in the validation cohort (7610 people) and was well calibrated. Ethnicity was a major factor with hazard ratios of 1.37 for indigenous Maori, 0.41 for East Asian and 0.55 for Indo Asian compared with European (P<0.001). We have developed a model using information usually available in primary care that provides a good assessment of a patient's risk of death. Results are similar to models previously published from smaller cohorts in other countries and apply to a wider range of patient ethnic groups. Copyright © 2015. Published by Elsevier Ireland Ltd.

  18. Earthquake Hazard Assessment: an Independent Review

    Science.gov (United States)

    Kossobokov, Vladimir

    2016-04-01

    Seismic hazard assessment (SHA), from term-less (probabilistic PSHA or deterministic DSHA) to time-dependent (t-DASH) including short-term earthquake forecast/prediction (StEF), is not an easy task, as it implies a delicate application of statistics to data of limited size and different accuracy. Regrettably, in many cases of SHA, t-DASH, and StEF, the claims of a high potential and efficiency of the methodology are based on a flawed application of statistics and are hardly suitable for communication to decision makers. The necessity and possibility of applying the modified tools of Earthquake Prediction Strategies, in particular, the Error Diagram, introduced by G.M. Molchan in the early 1990s for evaluation of SHA, and the Seismic Roulette null-hypothesis as a measure of the alerted space, is evident, and such testing must be done before claiming hazardous areas and/or times. The set of errors, i.e. the rates of failure and of the alerted space-time volume, compared to those obtained in the same number of random guess trials, permits evaluating the SHA method effectiveness and determining the optimal choice of the parameters in regard to specified cost-benefit functions. This and other information obtained in such testing may supply us with a realistic estimate of confidence in SHA results and related recommendations on the level of risks for decision making in regard to engineering design, insurance, and emergency management. These basics of SHA evaluation are exemplified with a few cases of misleading "seismic hazard maps", "precursors", and "forecast/prediction methods".

  19. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat, as they can lead to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend toward under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 h forecast to the 48 h forecast in all three models.
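
    The verification statistics quoted above (mean error or bias, RMSE, correlation coefficient) are straightforward to compute; the sketch below uses placeholder values rather than the TIGGE forecasts or observed rainfall.

```python
# Simple illustration of the verification statistics mentioned above, computed
# over grid points of a rainstorm region for an ensemble-mean forecast.
import numpy as np

observed = np.array([55.0, 120.0, 80.0, 30.0, 95.0, 150.0])   # mm/day (placeholder)
forecast = np.array([40.0, 100.0, 90.0, 45.0, 70.0, 120.0])   # MME mean, mm/day

mean_error = np.mean(forecast - observed)                      # bias
rmse = np.sqrt(np.mean((forecast - observed) ** 2))
corr = np.corrcoef(forecast, observed)[0, 1]
print(f"ME = {mean_error:.1f} mm/day, RMSE = {rmse:.1f} mm/day, CC = {corr:.2f}")
```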

  20. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  1. Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments

    Science.gov (United States)

    2011-05-03

    The report excerpt describes dust accumulation following a power-law trend (a straight line on log-log scales) given by R ∝ T^(−α), where R is the accumulation rate, T is the time interval of accumulation, and α is the scaling exponent. Figure 5(A) shows this relation for a representative set of model parameters; the straight line labeled h represents a linear increase in epipedon thickness with time. Cited work: Pelletier, "Frequency-magnitude distribution of eolian transport and the geomorphically most-effective windstorm," submitted but not accepted to Geophysical

  2. Cox proportional hazards models have more statistical power than logistic regression models in cross-sectional genetic association studies

    NARCIS (Netherlands)

    van der Net, Jeroen B.; Janssens, A. Cecile J. W.; Eijkemans, Marinus J. C.; Kastelein, John J. P.; Sijbrands, Eric J. G.; Steyerberg, Ewout W.

    2008-01-01

    Cross-sectional genetic association studies can be analyzed using Cox proportional hazards models with age as time scale, if age at onset of disease is known for the cases and age at data collection is known for the controls. We assessed to what degree and under what conditions Cox proportional

  3. A "mental models" approach to the communication of subsurface hydrology and hazards

    Science.gov (United States)

    Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison

    2016-05-01

    Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.

  4. Modelling the impact of causal and non-causal factors on disruption duration for Toronto's subway system: An exploratory investigation using hazard modelling.

    Science.gov (United States)

    Louie, Jacob; Shalaby, Amer; Habib, Khandker Nurul

    2017-01-01

    Most investigations of incident-related delay duration in the transportation context are restricted to highway traffic, with little attention given to delays due to transit service disruptions. Studies of transit-based delay duration are also considerably less comprehensive than their highway counterparts with respect to examining the effects of non-causal variables on the delay duration. However, delays due to incidents in public transit service can have serious consequences on the overall urban transportation system due to the pivotal and vital role of public transit. The ability to predict the durations of various types of transit system incidents is indispensable for better management and mitigation of service disruptions. This paper presents a detailed investigation on incident delay durations in Toronto's subway system over the year 2013, focusing on the effects of the incidents' location and time, the train-type involved, and the non-adherence to proper recovery procedures. Accelerated Failure Time (AFT) hazard models are estimated to investigate the relationship between these factors and the resulting delay duration. The empirical investigation reveals that incident types that impact both safety and operations simultaneously generally have longer expected delays than incident types that impact either safety or operations alone. Incidents at interchange stations are cleared faster than incidents at non-interchange stations. Incidents during peak periods have nearly the same delay durations as off-peak incidents. The estimated models are believed to be useful tools in predicting the relative magnitude of incident delay duration for better management of subway operations. Copyright © 2016 Elsevier Ltd. All rights reserved.
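
    A hedged sketch of an accelerated failure time fit in the spirit of this study, using the lifelines Weibull AFT model on simulated incident data; the covariates, effect sizes and the Weibull form are illustrative assumptions, not the authors' estimated model.

```python
# AFT sketch: how incident location and type multiply expected delay duration.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(2)
n = 400
interchange = rng.integers(0, 2, n)        # 1 = incident at an interchange station
safety_and_ops = rng.integers(0, 2, n)     # 1 = incident affects safety and operations
# Durations shrink at interchange stations and grow for safety+operations incidents
duration = rng.weibull(1.3, n) * 10 * np.exp(-0.3 * interchange + 0.5 * safety_and_ops)
df = pd.DataFrame({"duration_min": duration,
                   "observed": 1,                     # all delay durations fully observed
                   "interchange": interchange,
                   "safety_and_ops": safety_and_ops})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration_min", event_col="observed")
aft.print_summary()   # exp(coef) gives multiplicative effects on expected duration
```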

  5. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  6. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  7. Prediction of Major Vascular Events after Stroke

    DEFF Research Database (Denmark)

    Ovbiagele, Bruce; Goldstein, Larry B.; Amarenco, Pierre

    2014-01-01

    BACKGROUND: Identifying patients with recent stroke or transient ischemic attack (TIA) at high risk of major vascular events (MVEs; stroke, myocardial infarction, or vascular death) may help optimize the intensity of secondary preventive interventions. We evaluated the relationships between...... the baseline Framingham Coronary Risk Score (FCRS) and a novel risk prediction model and with the occurrence of MVEs after stroke or TIA in subjects enrolled in the Stroke Prevention by Aggressive Reduction in Cholesterol Level (SPARCL) trial. METHODS: Data from the 4731 subjects enrolled in the SPARCL study...... were analyzed. Hazard ratios (HRs) from Cox regression models were used to determine the risk of subsequent MVEs based on the FCRS predicting 20% or more 10-year coronary heart disease risk. The novel risk model was derived based on multivariable modeling with backward selection. Model discrimination...

  8. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    Science.gov (United States)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggering causes of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data on past events and causal factors, and because of the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.

  9. Modeling Flood Hazard Zones at the Sub-District Level with the Rational Model Integrated with GIS and Remote Sensing Approaches

    Directory of Open Access Journals (Sweden)

    Daniel Asare-Kyei

    2015-07-01

    Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability, while global datasets have resolutions too coarse to be relevant for local-scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS) techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI). Results show that about half of the study areas fall into high-intensity flood zones. Empirical validation using a statistical confusion matrix and the principles of Participatory GIS shows that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported by local expert knowledge, which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers in reducing the risk of flood disasters at the community level, where risk outcomes first materialize.
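
    The core rational-model step and the subsequent scaling into a Flood Hazard Index can be sketched as follows; the runoff coefficients, design rainfall intensity, areas and the min-max scaling are illustrative assumptions, not the study's calibrated values.

```python
# Illustrative sketch of a rational-model runoff estimate, Q = C * i * A
# (common SI form: i in mm/h and A in km^2 give Q in m^3/s), followed by
# min-max scaling of peak flows into a simple Flood Hazard Index (FHI).
import numpy as np

def peak_runoff_m3s(c_runoff, intensity_mm_per_h, area_km2):
    return c_runoff * intensity_mm_per_h * area_km2 / 3.6

# One entry per elevation zone / sub-catchment (made-up values)
c = np.array([0.6, 0.45, 0.3])               # runoff coefficients
intensity = np.array([80.0, 80.0, 80.0])     # design rainfall intensity (mm/h)
area = np.array([12.0, 25.0, 40.0])          # contributing area (km^2)

q_peak = peak_runoff_m3s(c, intensity, area)
fhi = (q_peak - q_peak.min()) / (q_peak.max() - q_peak.min())   # 0 = low, 1 = high hazard
for zone, (q, h) in enumerate(zip(q_peak, fhi), start=1):
    print(f"zone {zone}: Q = {q:.0f} m^3/s, FHI = {h:.2f}")
```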

  10. Scenario for a Short-Term Probabilistic Seismic Hazard Assessment (PSHA) in Chiayi, Taiwan

    Directory of Open Access Journals (Sweden)

    Chung-Han Chan

    2013-01-01

    Using seismic activity and the Meishan earthquake sequence that occurred from 1904 to 1906, a scenario for short-term probabilistic seismic hazards in the Chiayi region of Taiwan is assessed. The long-term earthquake occurrence rate in Taiwan was evaluated using a smoothing kernel. The highest seismicity rate was calculated around the Chiayi region. To consider earthquake interactions, the rate-and-state friction model was introduced to estimate the seismicity rate evolution due to the Coulomb stress change. As imparted by the 1904 Touliu earthquake, stress changes near the 1906 Meishan and Yangshuigang epicenters were higher than the magnitude of tidal triggering. With regard to the impact of the Meishan earthquake, the region close to the Yangshuigang earthquake epicenter had a +0.75 bar stress increase. The results indicated significant interaction between the three damaging events. Considering the path and site effects using ground motion prediction equations, a probabilistic seismic hazard in the form of a hazard evolution and a hazard map was assessed. A significant elevation in hazard following the three earthquakes in the sequence was determined. The results illustrate a possible scenario for seismic hazards in the Chiayi region that may take place repeatedly in the future. Such a scenario provides essential information on earthquake preparation, devastation estimation, emergency sheltering, utility restoration, and structure reconstruction.

  11. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved using the Riccati equation. To show the effectiveness of the proposed method this controller is...
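
    The LQR step mentioned above amounts to solving a discrete-time Riccati equation for a state-feedback gain; the sketch below iterates the Riccati recursion on a toy linear model, which is an assumption and not the P2AT robot's actual dynamics.

```python
# Minimal LQR sketch: iterate the discrete-time Riccati recursion to
# convergence and form the state-feedback gain K (u = -K x).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # toy double-integrator style dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                               # state weighting
R = np.array([[0.5]])                       # input weighting

P = Q.copy()
for _ in range(500):                        # fixed-point iteration of the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("LQR gain K =", K)
```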

  12. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  13. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, development of, and validation process for risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was conducted. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models were highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  14. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard...... that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup......'s dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude...

  15. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  16. Landslides Hazard Assessment Using Different Approaches

    Directory of Open Access Journals (Sweden)

    Coman Cristina

    2017-06-01

    Romania is one of the European countries with a high frequency of landslide occurrence. Landslide hazard maps are designed by considering the interaction of several factors which, by their joint action, may affect the equilibrium state of natural slopes. The aim of this paper is landslide hazard assessment using the methodology provided by Romanian national legislation and a widely used statistical method. The final results of these two analyses are quantitative or semi-quantitative landslide hazard maps, created in a geographic information system environment. The database used for this purpose includes geological and hydrogeological data, a digital terrain model, hydrological data, land use, seismic action, anthropic action and an inventory of active landslides. The GIS landslide hazard models were built for the geographical area of Iasi city, in north-eastern Romania.

  17. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
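
    A minimal sketch of the average-method degree-day accumulation that underlies such phenology models; the base temperature and daily temperatures below are placeholders, not validated cranberry fruitworm parameters.

```python
# Average-method degree-day accumulation from daily max/min temperatures.
def daily_degree_days(t_max, t_min, t_base):
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

daily_temps = [(18.0, 7.0), (22.0, 10.0), (25.0, 13.0), (20.0, 11.0)]  # (max, min) degC
T_BASE = 10.0   # illustrative base temperature
cumulative = 0.0
for t_max, t_min in daily_temps:
    cumulative += daily_degree_days(t_max, t_min, T_BASE)
print(f"accumulated degree-days: {cumulative:.1f}")
```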

  18. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  19. Seismic hazard, risk, and design for South America

    Science.gov (United States)

    Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison

    2018-01-01

    We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear-wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best

  20. Assessment of Debris Flow Potential Hazardous Zones Using Numerical Models in the Mountain Foothills of Santiago, Chile

    Science.gov (United States)

    Celis, C.; Sepulveda, S. A.; Castruccio, A.; Lara, M.

    2017-12-01

    Debris and mudflows are some of the main geological hazards in the mountain foothills of Central Chile. The risk of flows triggered in the basins of ravines that drain the Andean frontal range into the capital city, Santiago, increases with time due to accelerated urban expansion. Susceptibility assessments were made by several authors to detect the main active ravines in the area. Macul and San Ramon ravines have a high to medium debris flow susceptibility, whereas Lo Cañas, Apoquindo and Las Vizcachas ravines have a medium to low debris flow susceptibility. This study emphasizes delimiting the potentially hazardous zones using the numerical simulation program RAMMS-Debris Flows with the Voellmy model approach, and the debris-flow model LAHARZ. For the RAMMS approach, this is carried out by back-calculating the frictional parameters in the depositional zone using a known event, the debris and mudflows of May 3rd, 1993 in the Macul and San Ramon ravines. For the LAHARZ model, we calibrate the coefficients in the same scenario to match the conditions of the mountain foothills of Santiago. We use the information obtained for every main ravine in the study area, mainly because of the similarity in slopes and material transported. Simulations were made for the worst-case scenario, caused by the combination of intense rainfall storms, a high 0°C isotherm level and material availability in the basins where the flows are triggered. The results show that the runout distances are well simulated; therefore, a debris-flow hazard map could be developed with these models. Correlation issues concerning the run-up, deposit thickness and transversal areas are reported. Hence, the models do not represent entirely the complexity of the phenomenon, but they are a reliable approximation for preliminary hazard maps.

  1. Seismic hazard analysis. A methodology for the Eastern United States

    Energy Technology Data Exchange (ETDEWEB)

    Bernreuter, D L

    1980-08-01

    This report presents a probabilistic approach for estimating the seismic hazard in the Central and Eastern United States. The probabilistic model (Uniform Hazard Methodology) systematically incorporates the subjective opinion of several experts in the evaluation of seismic hazard. Subjective input, assumptions and associated hazard are kept separate for each expert so as to allow review and preserve diversity of opinion. The report is organized into five sections: Introduction, Methodology Comparison, Subjective Input, Uniform Hazard Methodology (UHM), and Uniform Hazard Spectrum. Section 2, Methodology Comparison, briefly describes the present approach and compares it with other available procedures. The remainder of the report focuses on the UHM. Specifically, Section 3 describes the elicitation of subjective input; Section 4 gives details of various mathematical models (earthquake source geometry, magnitude distribution, attenuation relationship) and how these models are combined to calculate seismic hazard. The last section, Uniform Hazard Spectrum, highlights the main features of typical results. Specific results and sensitivity analyses are not presented in this report. (author)

  2. Tsunami-hazard assessment based on subaquatic slope-failure susceptibility and tsunami-inundation modeling

    Science.gov (United States)

    Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael

    2015-04-01

    Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of resulting tsunami waves and iv) mapping of intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effect on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so called «intensity maps» for flooding that provide a link to the established hazard mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario, however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of

  3. Risk Prediction of One-Year Mortality in Patients with Cardiac Arrhythmias Using Random Survival Forest

    Directory of Open Access Journals (Sweden)

    Fen Miao

    2015-01-01

    Existing models for predicting mortality based on the traditional Cox proportional hazards approach (CPH) often have low prediction accuracy. This paper aims to develop a clinical risk model with good accuracy for predicting 1-year mortality in cardiac arrhythmia patients using the random survival forest (RSF), a robust approach for survival analysis. 10,488 cardiac arrhythmia patients available in the public MIMIC II clinical database were investigated, with 3,452 deaths occurring within the 1-year follow-up. Forty risk factors, including demographics, clinical and laboratory information, and antiarrhythmic agents, were analyzed as potential predictors of all-cause mortality. RSF was adopted to build a comprehensive survival model and a simplified risk model composed of the 14 top risk factors. The comprehensive model achieved a prediction accuracy of 0.81 measured by c-statistic with 10-fold cross-validation. The simplified risk model also achieved a good accuracy of 0.799. Both results outperformed traditional CPH (which achieved a c-statistic of 0.733 for the comprehensive model and 0.718 for the simplified model). Moreover, various factors are observed to have a nonlinear impact on cardiac arrhythmia prognosis. As a result, the RSF-based model, which took nonlinearity into account, significantly outperformed the traditional Cox proportional hazards model and has great potential to be a more effective approach for survival analysis.
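
    The random survival forest workflow can be sketched with the scikit-survival package (a software assumption; the study does not state its implementation), using simulated covariates with a nonlinear effect on the hazard.

```python
# Random survival forest sketch on synthetic survival data, with an apparent
# c-statistic; in practice this would be cross-validated as in the study.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(3)
n, p = 500, 10
X = rng.normal(size=(n, p))
risk = 0.8 * X[:, 0] + 0.5 * X[:, 1] ** 2          # nonlinear effect on the hazard
time = rng.exponential(1 / (0.1 * np.exp(risk)))
event = time < 1.0                                  # administrative censoring at 1 year
y = Surv.from_arrays(event=event, time=np.minimum(time, 1.0))

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)
c_index = concordance_index_censored(event, np.minimum(time, 1.0), rsf.predict(X))[0]
print(f"apparent c-statistic: {c_index:.3f}")
```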

  4. Analysis of uncertainties in a probabilistic seismic hazard estimation, example for France

    International Nuclear Information System (INIS)

    Beauval, C.

    2003-12-01

    This thesis proposes a new methodology that allows one to pinpoint the key parameters that control probabilistic seismic hazard assessment (PSHA) and, at the same time, to quantify the impact of these parameters' uncertainties on hazard estimates. Cornell-McGuire's method is used here. First, uncertainties on magnitude and location determinations are modeled and quantified: resulting variability on hazard estimates ranges between 5% and 25% (=COV), depending on the site and the return period. An impact study is then performed, in order to determine the hierarchy between the impacts on hazard of the choices of four other parameters: intensity-magnitude correlation, minimum and maximum magnitudes, the truncation of the attenuation relationship. The results at 34 Hz (PGA) indicate that the maximum magnitude is the least influential parameter (from 100 to 10000 years); whereas the intensity-magnitude correlation and the truncation of ground motion predictions (>2σ) are the controlling parameters at all return periods (up to 30% decrease each at 10000 years). An increase in the minimum magnitude contributing to the hazard, from 3.5 to 4.5, can also produce non-negligible impacts at small return periods (up to 20% decrease of hazard results at 475 years). Finally, the overall variability on hazard estimates due to the combined choices of the four parameters can reach up to 30% (COV, at 34 Hz). For lower frequencies (<5 Hz), the overall variability increases and maximum magnitude becomes a controlling parameter. Therefore, variability of estimates due to catalog uncertainties and to the choices of these four parameters must be taken into account in all probabilistic seismic hazard studies in France. To reduce variability in hazard estimates, future research should concentrate on the elaboration of an appropriate intensity-magnitude correlation, as well as on a more realistic way of taking into account ground motion dispersion. (author)

  5. Toward risk assessment 2.0: Safety supervisory control and model-based hazard monitoring for risk-informed safety interventions

    International Nuclear Information System (INIS)

    Favarò, Francesca M.; Saleh, Joseph H.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the pervading probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools here developed are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation

  6. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  7. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  8. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  9. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
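
    DeepSurv is trained by minimizing the Cox negative log partial likelihood evaluated on the network's risk score. A minimal NumPy sketch of that loss for an arbitrary risk function is given below; it is illustrative only (it assumes no tied event times), and the architecture, regularization, and optimizer of the actual DeepSurv implementation differ.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative log Cox partial likelihood (assumes no tied event times).

    risk  : model risk scores h(x_i), e.g. the output of a neural network
    time  : observed times
    event : 1 if failure observed, 0 if censored
    """
    order = np.argsort(-time)                       # sort by descending time
    risk, time, event = risk[order], time[order], event[order]
    log_risk_set = np.logaddexp.accumulate(risk)    # log sum of exp(risk) over each risk set
    return -np.sum((risk - log_risk_set)[event == 1])

# Toy data: higher covariate -> higher hazard; a linear "network" beta * x
rng = np.random.default_rng(0)
x = rng.normal(size=200)
t = rng.exponential(scale=np.exp(-x))               # true hazard increases with x
e = (rng.uniform(size=200) < 0.8).astype(int)       # roughly 20% censoring
for beta in (0.0, 0.5, 1.0, 1.5):
    loss = cox_neg_log_partial_likelihood(beta * x, t, e)
    print(f"beta={beta:.1f}  loss={loss:.1f}")
```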

  10. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequence and has excellent application value in settlement prediction.
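
    For orientation, the sketch below implements the classical GM(1,1) grey model, the simpler ancestor of the NGM (1,1,k,c) model described above, without the nonhomogeneous-index extension; the settlement-like data are invented.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=3):
    """Classical GM(1,1) grey model: fit to sequence x0, forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean sequence of x1
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey development & control coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response function
    return np.diff(x1_hat, prepend=0.0)                  # inverse AGO -> fitted and forecast x0

settlement = [2.1, 3.0, 4.3, 6.1, 8.7]                   # illustrative settlement increments (mm)
print(np.round(gm11_fit_predict(settlement), 2))
```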

  11. A methodology for physically based rockfall hazard assessment

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Rockfall hazard assessment is not simple to achieve in practice and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale, both along linear features and within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.

  12. Chemical agnostic hazard prediction: Statistical inference of toxicity pathways - data for Figure 2

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset comprises one SigmaPlot 13 file containing measured survival data and survival data predicted from the model coefficients selected by the LASSO...

  13. Natural phenomena hazards project for Department of Energy sites

    International Nuclear Information System (INIS)

    Coats, D.W.

    1985-01-01

    Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. In Phase 1, LLNL gathered information on the sites and their critical facilities, including nuclear reactors, fuel-reprocessing plants, high-level waste storage and treatment facilities, and special nuclear material facilities. In Phase 2, development of seismic and wind hazard models was initiated. These hazard models express the annual probability that the site will experience an earthquake or wind speed greater than some specified magnitude. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 13 references, 2 figures, 1 table
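
    The hazard curves described here give annual exceedance probabilities; for design discussions one often wants the chance of at least one exceedance over a facility's lifetime. Assuming independent years (and purely illustrative numbers, not DOE criteria), the conversion is a one-liner:

```python
# Probability of at least one exceedance in an n-year design life,
# assuming independent years (illustrative values, not DOE criteria).
annual_p = 1.0e-3          # annual probability of exceeding the design ground motion
design_life = 50           # years
p_life = 1.0 - (1.0 - annual_p) ** design_life
print(f"P(>=1 exceedance in {design_life} yr) = {p_life:.3f}")   # ~0.049
```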

  14. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Wicked Problems in Natural Hazard Assessment and Mitigation

    Science.gov (United States)

    Stein, S.; Steckler, M. S.; Rundle, J. B.; Dixon, T. H.

    2017-12-01

    Social scientists have defined "wicked" problems that are "messy, ill-defined, more complex than we fully grasp, and open to multiple interpretations based on one's point of view... No solution to a wicked problem is permanent or wholly satisfying, which leaves every solution open to easy polemical attack." These contrast with "tame" problems in which necessary information is available and solutions - even if difficult and expensive - are straightforward to identify and execute. Updating the U.S.'s aging infrastructure is a tame problem, because what is wrong and how to fix it are clear. In contrast, addressing climate change is a wicked problem because its effects are uncertain and the best strategies to address them are unclear. An analogous approach can be taken to natural hazard problems. In tame problems, we have a good model of the process, good information about past events, and data implying that the model should predict future events. In such cases, we can make a reasonable assessment of the hazard that can be used to develop mitigation strategies. Earthquake hazard mitigation for San Francisco is a relatively tame problem. We understand how the earthquakes result from known plate motions, have information about past earthquakes, and have geodetic data implying that future similar earthquakes will occur. As a result, it is straightforward to develop and implement mitigation strategies. However, in many cases, hazard assessment and mitigation is a wicked problem. How should we prepare for a great earthquake on plate boundaries where tectonics favor such events but we have no evidence that they have occurred and hence how large they may be or how often to expect them? How should we assess the hazard within plates, for example in the New Madrid seismic zone, where large earthquakes have occurred but we do not understand their causes and geodetic data show no strain accumulating? How can we assess the hazard and make sensible policy when the recurrence of

  16. Hazard assessments of double-shell flammable gas tanks

    International Nuclear Information System (INIS)

    Fox, G.L.; Stepnewski, D.D.

    1994-01-01

    This report is the fourth in a series of hazard assessments performed on the double-shell flammable gas watch list tanks. This report focuses on hazards associated with the double-shell watch list tanks (101-AW, 103-AN, 104-AN, and 105-AN). While a similar assessment has already been performed for tank 103-SY, it is also included here to incorporate a more representative slurry gas mixture and provide a consistent basis for comparing results for all the flammable gas tanks. This report is intended to provide an in-depth assessment by considering the details of the gas release event and slurry gas mixing as the gas is released from the waste. The consequences of postulated gas ignition are evaluated using a plume burn model and updated ignition frequency predictions. Tank pressurization which results from a gas burn, along with the structural response, is also considered. The report is intended to support the safety basis for work activities in flammable gas tanks by showing margins to safety limits that are available in the design and procedures

  17. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  18. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting-static or dynamic. Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  19. The Deflector Selector: A machine learning framework for prioritizing hazardous object deflection technology development

    Science.gov (United States)

    Nesvold, E. R.; Greenberg, A.; Erasmus, N.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.

    2018-05-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.

  20. The Deflector Selector: A Machine Learning Framework for Prioritizing Hazardous Object Deflection Technology Development

    Science.gov (United States)

    Nesvold, Erika; Greenberg, Adam; Erasmus, Nicolas; Van Heerden, Elmarie; Galache, J. L.; Dahlstrom, Eric; Marchis, Franck

    2018-01-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We will present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We will describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.

  1. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    … the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually … and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced. … A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision …

  2. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  3. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  4. Earthquake Hazard and Risk in Alaska

    Science.gov (United States)

    Black Porto, N.; Nyst, M.

    2014-12-01

    Alaska is one of the most seismically active and tectonically diverse regions in the United States. To examine risk, we have updated the seismic hazard model in Alaska. The current RMS Alaska hazard model is based on the 2007 probabilistic seismic hazard maps for Alaska (Wesson et al., 2007; Boyd et al., 2007). The 2015 RMS model will update several key source parameters, including extending the earthquake catalog, implementing a new set of crustal faults, and updating the subduction zone geometry and recurrence rate. First, we extend the earthquake catalog to 2013, decluster the catalog, and compute new background rates. We then create a crustal fault model based on the Alaska 2012 fault and fold database. This new model increased the number of crustal faults from ten in 2007 to 91 faults in the 2015 model. This includes the addition of the western Denali fault, Cook Inlet folds near Anchorage, and thrust faults near Fairbanks. Previously the subduction zone was modeled at a uniform depth. In this update, we model the intraslab as a series of deep stepping events. We also use the best available data, such as Slab 1.0, to update the geometry of the subduction zone. The city of Anchorage represents 80% of the risk exposure in Alaska. In the 2007 model, the hazard in Alaska was dominated by the frequent rate of magnitude 7 to 8 events (Gutenberg-Richter distribution), while large magnitude 8+ events had a low recurrence rate (characteristic) and therefore did not contribute as highly to the overall risk. We will review these recurrence rates and will present the results and their impact on Anchorage. We will compare our hazard update to the 2007 USGS hazard map and discuss the changes and drivers for these changes. Finally, we will examine the impact model changes have on Alaska earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the

  5. The use of animals as a surveillance tool for monitoring environmental health hazards, human health hazards and bioterrorism.

    Science.gov (United States)

    Neo, Jacqueline Pei Shan; Tan, Boon Huan

    2017-05-01

    This review discusses the utilization of wild or domestic animals as surveillance tools for monitoring naturally occurring environmental and human health hazards. Besides providing early warning of natural hazards, animals can also provide early warning of societal hazards like bioterrorism. Animals are ideal surveillance tools for humans because they share the same environment as humans and spend more time outdoors than humans, increasing their exposure risk. Furthermore, the biologically compressed lifespans of some animals may allow them to develop clinical signs more rapidly after exposure to specific pathogens. Animals are an excellent channel for monitoring novel and known pathogens with outbreak potential given that more than 60% of emerging infectious diseases in humans originate as zoonoses. This review attempts to highlight animal illnesses, deaths, biomarkers or sentinel events, to remind human and veterinary public health programs that animal health can be used to discover, monitor or predict environmental health hazards, human health hazards, or bioterrorism. Lastly, we hope that this review will encourage the implementation of animals as a surveillance tool by clinicians, veterinarians, ecosystem health professionals, researchers and governments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Considerations in comparing the U.S. Geological Survey one‐year induced‐seismicity hazard models with “Did You Feel It?” and instrumental data

    Science.gov (United States)

    White, Isabel; Liu, Taojun; Luco, Nicolas; Liel, Abbie

    2017-01-01

    The recent steep increase in seismicity rates in Oklahoma, southern Kansas, and other parts of the central United States led the U.S. Geological Survey (USGS) to develop, for the first time, a probabilistic seismic hazard forecast for one year (2016) that incorporates induced seismicity. In this study, we explore a process to ground‐truth the hazard model by comparing it with two databases of observations: modified Mercalli intensity (MMI) data from the “Did You Feel It?” (DYFI) system and peak ground acceleration (PGA) values from instrumental data. Because the 2016 hazard model was heavily based on earthquake catalogs from 2014 to 2015, this initial comparison utilized observations from these years. Annualized exceedance rates were calculated with the DYFI and instrumental data for direct comparison with the model. These comparisons required assessment of the options for converting hazard model results and instrumental data from PGA to MMI for comparison with the DYFI data. In addition, to account for known differences that affect the comparisons, the instrumental PGA and DYFI data were declustered, and the hazard model was adjusted for local site conditions. With these adjustments, examples at sites with the most data show reasonable agreement in the exceedance rates. However, the comparisons were complicated by the spatial and temporal completeness of the instrumental and DYFI observations. Furthermore, most of the DYFI responses are in the MMI II–IV range, whereas the hazard model is oriented toward forecasts at higher ground‐motion intensities, usually above about MMI IV. Nevertheless, the study demonstrates some of the issues that arise in making these comparisons, thereby informing future efforts to ground‐truth and improve hazard modeling for induced‐seismicity applications.

  7. Flash Flood Hazard Susceptibility Mapping Using Frequency Ratio and Statistical Index Methods in Coalmine Subsidence Areas

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2016-09-01

    Full Text Available This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software. Based on the flash flood hazard locations, a flood hazard inventory map was built. Seventy percent (n = 60) of the flooding hazard locations were randomly selected for building the models. The remaining 30% (n = 25) of the flooded hazard locations were used for validation. Considering that the XQG used to be a coal mining area, coalmine caves and subsidence caused by coal mining exist in this catchment, as well as many ground fissures. Thus, this study took the subsidence risk level into consideration for FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values for each conditional parameter and checked their impacts on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using frequency ratio (FR)-natural breaks, statistical index (SI)-natural breaks and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was more appropriate for generating FFHSM for the Xiqu Gully.
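
    The frequency ratio statistic at the heart of the FR model is straightforward to compute: for each class of a conditioning factor it is the share of flood locations falling in that class divided by the share of total area in that class, and the susceptibility index sums the FR values over all factors. The NumPy sketch below shows the calculation for a single factor on an invented raster, not the Xiqu Gully data.

```python
import numpy as np

def frequency_ratio(factor_class, flood_mask, n_classes):
    """FR per class = (% of flood cells in class) / (% of all cells in class)."""
    fr = np.zeros(n_classes)
    for c in range(n_classes):
        in_class = factor_class == c
        pct_floods = flood_mask[in_class].sum() / max(flood_mask.sum(), 1)
        pct_area = in_class.sum() / factor_class.size
        fr[c] = pct_floods / pct_area if pct_area > 0 else 0.0
    return fr

# Illustrative raster: 4 slope classes, 85 flood cells scattered over 10,000 cells
rng = np.random.default_rng(1)
slope_class = rng.integers(0, 4, size=10_000)
flood = np.zeros(10_000, dtype=bool)
flood[rng.choice(10_000, size=85, replace=False)] = True

fr_slope = frequency_ratio(slope_class, flood, 4)
susceptibility = fr_slope[slope_class]        # per-cell contribution from this factor alone
print(np.round(fr_slope, 2))
```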

  8. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    Science.gov (United States)

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data is right censored and survival times are influenced by covariates. A modified chi-squared type test, known as Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for hypertabastic survival model and four other unimodal hazard rate functions. The results of simulation study showed that the hypertabastic distribution can be used as an alternative to log-logistic and log-normal distribution. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Recent advances, and unresolved issues, in the application of computational modelling to the prediction of the biological effects of nanomaterials

    International Nuclear Information System (INIS)

    Winkler, David A.

    2016-01-01

    Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.

  10. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a combined grey-Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
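
    The Markov step of a grey-Markov model corrects the grey trend forecast using the empirical transition behaviour of its residuals. The sketch below shows only that residual-state correction, assuming the trend predictions and residuals are already available; state boundaries and residual values are invented.

```python
import numpy as np

def markov_residual_correction(residuals, bins):
    """Estimate a residual-state transition matrix and the expected correction
    to apply after the last observed state (illustrative grey-Markov step)."""
    states = np.digitize(residuals, bins)               # discretize residuals into states
    n = len(bins) + 1
    T = np.zeros((n, n))
    for s, s_next in zip(states[:-1], states[1:]):
        T[s, s_next] += 1
    T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-normalize transition counts
    # crude representative value for each state (edge states use the boundary itself)
    centers = np.concatenate(([bins[0]], (bins[:-1] + bins[1:]) / 2, [bins[-1]]))
    return T[states[-1]] @ centers                      # expected next-step residual

residuals = np.array([0.4, -0.6, 0.2, 0.9, -0.3, 0.5, -0.8, 0.1])  # trend-model errors
correction = markov_residual_correction(residuals, bins=np.array([-0.5, 0.0, 0.5]))
print(round(float(correction), 3))   # add this to the next grey-model prediction
```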

  11. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
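
    The MMA itself is just the mean of the two model outputs, compared against observations by RMSD. The sketch below reproduces that arithmetic with made-up sweat-loss numbers standing in for the SCENARIO and HSDA predictions.

```python
import numpy as np

def rmsd(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Illustrative sweat-loss values in g/h (not the paper's datasets)
observed = np.array([620., 480., 555., 710., 390.])
scenario = np.array([560., 520., 600., 650., 430.])   # rational model prediction
hsda     = np.array([700., 430., 520., 780., 360.])   # empirical model prediction
mma = (scenario + hsda) / 2.0                         # multi-model average

for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
    print(f"{name:8s} RMSD = {rmsd(pred, observed):6.1f} g/h")
```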

  12. An advanced model for spreading and evaporation of accidentally released hazardous liquids on land

    NARCIS (Netherlands)

    Trijssenaar-Buhre, I.J.M.; Sterkenburg, R.P.; Wijnant-Timmerman, S.I.

    2009-01-01

    Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a

  13. An advanced model for spreading and evaporation of accidentally released hazardous liquids on land

    NARCIS (Netherlands)

    Trijssenaar-Buhre, I.J.M.; Wijnant-Timmerman, S.L.

    2008-01-01

    Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a

  14. Level-Dependent Nonlinear Hearing Protector Model in the Auditory Hazard Assessment Algorithm for Humans

    Science.gov (United States)

    2015-04-01

    … HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are … Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with …

  15. Eastern US seismic hazard characterization update

    International Nuclear Information System (INIS)

    Savy, J.B.; Boissonnade, A.C.; Mensing, R.W.; Short, C.M.

    1993-06-01

    In January 1989, LLNL published the results of a multi-year project, funded by NRC, on estimating seismic hazard at nuclear plant sites east of the Rockies. The goal of this study was twofold: to develop a good central estimate (median) of the seismic hazard and to characterize the uncertainty in the estimates of this hazard. In 1989, LLNL was asked by DOE to develop site-specific estimates of the seismic hazard at the Savannah River Site (SRS) in South Carolina as part of the New Production Reactor (NPR) project. For the purpose of the NPR, a complete review of the methodology and of the data acquisition process was performed. Work done under the NPR project has shown that first-order improvement in the estimates of the uncertainty (i.e., lower mean hazard values) could be easily achieved by updating the modeling of the seismicity and ground motion attenuation uncertainty. To this effect, NRC sponsored LLNL to perform a re-elicitation to update the seismicity and ground motion experts' inputs and to revise methods to combine seismicity and ground motion inputs in the seismic hazard analysis for nuclear power plant sites east of the Rocky Mountains. The objective of the recent study was to include the first-order improvements that reflect the latest knowledge in seismicity and ground motion modeling and produce an update of all the hazard results produced in the 1989 study. In particular, it had been demonstrated that eliciting seismicity information in terms of rates of earthquakes rather than a- and b-values, and changing the elicitation format to a one-on-one interview, improved our ability to express the uncertainty of earthquake rates of occurrence at large magnitudes. Thus, NRC sponsored this update study to refine the model of uncertainty, to re-elicit the experts' interpretations of the zonation and seismicity, and to re-elicit the ground motion models, based on the current state of knowledge

  16. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. They are appropriate for evaluation and mitigation plan development in landslide-prone areas. There are several techniques available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. On the other hand, quantitative methods are objective and they are commonly used due to the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural network (ANN), fuzzy logic, etc.) provide rigorous analysis to assess landslide hazard over large regions. However, they depend on qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. One application has been developed in El Salvador, a country of Central America where earthquake-triggered landslides are common. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example, slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
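
    Both statistical approaches compared in the study are available in standard libraries. The sketch below fits a logistic regression and a small back-propagation neural network to synthetic slope-instability factors and compares their AUCs; the data and factor weights are invented, not the El Salvador inventory.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic conditioning factors: slope gradient, elevation, precipitation, roughness
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))
logit = 1.5 * X[:, 0] + 0.8 * X[:, 2] - 0.5 * X[:, 3]      # instability driven by slope and rain
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("LR  AUC:", round(roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]), 3))
print("ANN AUC:", round(roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]), 3))
```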

  17. A transparent and data-driven global tectonic regionalization model for seismic hazard assessment

    Science.gov (United States)

    Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice

    2018-05-01

    A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgements; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification-scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications as well as the subjective probabilities (e.g. degree of being active/degree of being cratonic) that indicate the degree to which a site belongs in a tectonic category.
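
    Fuzzy membership functions are what turn a crisp classification into a "degree of belonging". The sketch below grades a site's degree of being cratonic from two indicators and combines them with a fuzzy AND (minimum); the membership thresholds are invented for illustration and are not the authors' regionalization scheme.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def degree_cratonic(lithosphere_thickness_km, seismicity_rate_per_yr):
    # Invented membership thresholds, purely illustrative
    thick = trapezoid(lithosphere_thickness_km, 120, 180, 400, 401)
    quiet = trapezoid(-seismicity_rate_per_yr, -0.5, -0.1, 0, 1)   # low rate -> high membership
    return min(thick, quiet)            # fuzzy AND: conservative combination

print(degree_cratonic(200, 0.05))       # strongly cratonic site
print(degree_cratonic(140, 0.4))        # intermediate / weakly cratonic site
```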

  18. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1982-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined

  19. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation of ingestion or uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs

  20. Evaluation of Heliostat Standby Aiming Strategies to Reduce Avian Flux Hazards and Impacts on Operational Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wendelin, Timothy J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ho, Clifford K. [Sandia National Laboratories; Horstman, Luke [Sandia National Laboratories

    2017-06-03

    This paper presents a study of alternative heliostat standby aiming strategies and their impact on avian flux hazards and operational performance of a concentrating solar power plant. A mathematical model was developed that predicts the bird-feather temperature as a function of solar irradiance, thermal emittance, convection, and thermal properties of the feather. The irradiance distribution in the airspace above the Ivanpah Unit 2 heliostat field was simulated using a ray-trace model for two different times of the day, four days of the year, and nine different standby aiming strategies. The impact of the alternative aiming strategies on operational performance was assessed by comparing the heliostat slew times from standby position to the receiver for the different aiming strategies. Increased slew times increased a proxy start-up time that reduced the simulated annual energy production. Results showed that spreading the radial aim points around the receiver to a distance of ~150 m or greater reduced the hazardous exposure times during which the feather temperature exceeded the hazard metric of 160 degrees C. The hazardous exposure times were reduced by ~23% and 90% at a radial spread of aim points extending to 150 m and 250 m, respectively, but the simulated annual energy production decreased as a result of increased slew times. Single point-focus aiming strategies were also evaluated, but these strategies increased the exposure hazard relative to other aiming strategies.
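
    The feather-temperature model balances absorbed concentrated flux against radiative and convective losses. The sketch below solves a steady-state version of such a balance with a root finder, using assumed values for absorptance, emittance, and the convection coefficient; the paper's model is transient and more detailed.

```python
from scipy.optimize import brentq

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def feather_temperature(irradiance, t_air=300.0, absorptance=0.8,
                        emittance=0.9, h_conv=20.0):
    """Steady-state feather temperature (K) from an energy balance:
    absorbed flux = radiative loss + convective loss (illustrative parameters)."""
    def balance(T):
        return (absorptance * irradiance
                - emittance * SIGMA * (T**4 - t_air**4)
                - h_conv * (T - t_air))
    return brentq(balance, t_air, 2000.0)

for flux_suns in (5, 20, 50):                       # concentration in "suns" (1 sun ~ 1 kW/m^2)
    T = feather_temperature(flux_suns * 1000.0)
    print(f"{flux_suns:3d} suns -> {T - 273.15:6.1f} C")
```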

  1. Seismic Hazard of the Uttarakhand Himalaya, India, from Deterministic Modeling of Possible Rupture Planes in the Area

    Directory of Open Access Journals (Sweden)

    Anand Joshi

    2013-01-01

    Full Text Available This paper presents the use of a semiempirical method for seismic hazard zonation. The seismotectonically important region of the Uttarakhand Himalaya has been considered in this work. Ruptures along the lineaments in the area identified from the tectonic map are modeled deterministically using the semiempirical approach given by Midorikawa (1993). This approach makes use of an attenuation relation of peak ground acceleration for simulating strong ground motion at any site. Strong motion data collected over a span of three years in this region have been used to develop an attenuation relation of peak ground acceleration of limited magnitude and distance applicability. The developed attenuation relation is used in the semiempirical method to predict peak ground acceleration from the modeled rupture planes in the area. A set of values of peak ground acceleration from possible ruptures in the area at the point of investigation is further used to compute the probability of exceedance of peak ground acceleration values of 100 and 200 gals. The prepared map shows that regions like Tehri, Chamoli, Almora, Srinagar, Devprayag, Bageshwar, and Pauri fall in a zone of 10% probability of exceedance of peak ground acceleration of value 200 gals.
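
    The mapped quantity is the probability that ground motion predicted by an attenuation relation, together with its lognormal scatter, exceeds a threshold. The sketch below computes that probability for a single rupture using a placeholder attenuation relation with invented coefficients, not the relation developed in the paper.

```python
import math

def p_exceed(pga_threshold_gal, magnitude, distance_km, sigma_ln=0.6):
    """P(PGA > threshold) from a generic attenuation relation with lognormal scatter.
    The relation and coefficients below are illustrative placeholders only."""
    ln_median = 1.4 + 1.1 * magnitude - 1.1 * math.log(distance_km + 10.0)
    z = (math.log(pga_threshold_gal) - ln_median) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))      # 1 - Phi(z)

# Probability that a modeled M6.8 rupture 30 km away exceeds 100 and 200 gal
for thr in (100.0, 200.0):
    print(thr, "gal:", round(p_exceed(thr, 6.8, 30.0), 3))
```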

  2. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  3. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  4. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  5. Introducing Geoscience Students to Numerical Modeling of Volcanic Hazards: The example of Tephra2 on VHub.org

    Directory of Open Access Journals (Sweden)

    Leah M. Courtland

    2012-07-01

    Full Text Available The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.

  6. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy through evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.

  7. Progress for the Industry Application External Hazard Analyses Early Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Prescott, Steven [Idaho National Lab. (INL), Idaho Falls, ID (United States); Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ryan, Emerald [Idaho State Univ., Pocatello, ID (United States); Bhandari, Bishwo [Idaho State Univ., Pocatello, ID (United States); Sludern, Daniel [Idaho State Univ., Pocatello, ID (United States); Pope, Chad [Idaho State Univ., Pocatello, ID (United States); Sampath, Ram [Centroid PIC, Idaho Falls, ID (United States)

    2015-09-01

    This report describes the current progress and status related to the Industry Application #2 focusing on External Hazards. For this industry application within the Light Water Reactor Sustainability (LWRS) Program Risk-Informed Safety Margin Characterization (RISMC) R&D Pathway, we will create the Risk-Informed Margin Management (RIMM) approach to represent meaningful (i.e., realistic facility representation) event scenarios and consequences by using an advanced 3D facility representation that will evaluate external hazards such as flooding and earthquakes in order to identify, model and analyze the appropriate physics that needs to be included to determine plant vulnerabilities related to external events; manage the communication and interactions between different physics modeling and analysis technologies; and develop the computational infrastructure through tools related to plant representation, scenario depiction, and physics prediction. One of the unique aspects of the RISMC approach is how it couples probabilistic approaches (the scenario) with mechanistic phenomena representation (the physics) through simulation. This simulation-based modeling allows decision makers to focus on a variety of safety, performance, or economic metrics. In this report, we describe the evaluation of various physics toolkits related to flooding representation. Ultimately, we will be coupling the flooding representation with other events such as earthquakes in order to provide coupled physics analysis for scenarios where interactions exist.

  8. Improved Methods for Predicting Property Prices in Hazard Prone Dynamic Markets

    NARCIS (Netherlands)

    de Koning, Koen; Filatova, Tatiana; Bin, Okmyung

    Property prices are affected by changing market conditions, incomes and preferences of people. Price trends in natural hazard zones may shift significantly and abruptly after a disaster signalling structural systemic changes in property markets. It challenges accurate market assessments of property

  9. Coalbed methane: from hazard to resource

    Science.gov (United States)

    Flores, R.M.

    1998-01-01

    Coalbed gas, which mainly consists of methane, has remained a major hazard affecting safety and productivity in underground coal mines for more than 100 years. Coalbed gas emissions have resulted in outbursts and explosions where ignited by open lights, smoking or improper use of black blasting powder, and machinery operations. Investigations of coal gas outbursts and explosions during the past century were aimed at predicting and preventing this mine hazard. During this time, gas emissions were diluted with ventilation by airways (e.g., tunnels, vertical and horizontal drillholes, shafts) and by drainage boreholes. The 1970s 'energy crisis' led to studies of the feasibility of producing the gas for commercial use. Subsequent research on the origin, accumulation, distribution, availability, and recoverability has been pursued vigorously during the past two decades. Since the 1970s, research investigations on the causes and effects of coal mine outbursts and gas emissions have led to major advances towards the recovery and development of coalbed methane for commercial use. Thus, coalbed methane as a mining hazard was harnessed as a conventional gas resource.

  10. Insights into earthquake hazard map performance from shaking history simulations

    Science.gov (United States)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher
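
    The simulation idea can be illustrated with a toy Monte Carlo experiment (Python; the source rates, magnitude bounds, and attenuation form below are placeholders, not the study's parameters): draw earthquake histories for a fixed observation window, take the peak shaking per history, define the "map" value as the shaking with a 10% chance of exceedance in that window, and then look at how individual histories scatter around it.

      import numpy as np

      rng = np.random.default_rng(42)

      # Toy source model: placeholder values, not the study's parameters.
      years = 50            # observation window
      annual_rate = 0.2     # mean number of events per year affecting the site
      b_value = 1.0         # Gutenberg-Richter b-value
      m_min, m_max = 4.0, 7.5

      def sample_magnitudes(n):
          """Truncated Gutenberg-Richter magnitudes via inverse-transform sampling."""
          u = rng.random(n)
          return m_min - np.log10(1.0 - u * (1.0 - 10.0 ** (-b_value * (m_max - m_min)))) / b_value

      def toy_pga(mag, dist_km):
          """Crude attenuation: shaking grows with magnitude, decays with distance."""
          ln_pga = -3.5 + 0.9 * mag - 1.2 * np.log(dist_km + 10.0)
          return np.exp(ln_pga + rng.normal(0.0, 0.6, size=np.shape(mag)))

      def max_shaking_one_history():
          """Peak shaking at the site over one simulated observation window."""
          n = rng.poisson(annual_rate * years)
          if n == 0:
              return 0.0
          mags = sample_magnitudes(n)
          dists = rng.uniform(5.0, 100.0, size=n)
          return float(toy_pga(mags, dists).max())

      histories = np.array([max_shaking_one_history() for _ in range(20000)])

      # "Map" value: shaking with a 10% chance of exceedance in the window,
      # derived from the same source model that generated the histories.
      mapped = np.quantile(histories, 0.90)

      print(f"mapped shaking (10% in {years} yr): {mapped:.3f} g")
      print(f"fraction of histories exceeding the map: {np.mean(histories > mapped):.3f}")
      print(f"largest simulated exceedance ratio: {histories.max() / mapped:.1f}x")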

  11. Toward uniform probabilistic seismic hazard assessments for Southeast Asia

    Science.gov (United States)

    Chan, C. H.; Wang, Y.; Shi, X.; Ornthammarath, T.; Warnitchai, P.; Kosuwan, S.; Thant, M.; Nguyen, P. H.; Nguyen, L. M.; Solidum, R., Jr.; Irsyam, M.; Hidayati, S.; Sieh, K.

    2017-12-01

    Although most Southeast Asian countries have seismic hazard maps, various methodologies and quality result in appreciable mismatches at national boundaries. We aim to conduct a uniform assessment across the region through standardized earthquake and fault databases, ground-shaking scenarios, and regional hazard maps. Our earthquake database contains earthquake parameters obtained from global and national seismic networks, harmonized by removal of duplicate events and the use of moment magnitude. Our active-fault database includes fault parameters from previous studies and from the databases implemented for national seismic hazard maps. Another crucial input for seismic hazard assessment is proper evaluation of ground-shaking attenuation. Since few ground-motion prediction equations (GMPEs) have used local observations from this region, we evaluated attenuation by comparison of instrumental observations and felt intensities for recent earthquakes with the ground shaking predicted by published GMPEs. We then incorporate the best-fitting GMPEs and site conditions into our seismic hazard assessments. Based on the database and appropriate GMPEs, we have constructed regional probabilistic seismic hazard maps. The assessment shows the highest seismic hazard levels near those faults with high slip rates, including the Sagaing Fault in central Myanmar, the Sumatran Fault in Sumatra, the Palu-Koro, Matano and Lawanopo Faults in Sulawesi, and the Philippine Fault across several islands of the Philippines. In addition, our assessment demonstrates the important fact that regions with low earthquake probability may well have a higher aggregate probability of future earthquakes, since they encompass much larger areas than the areas of high probability. The significant irony, then, is that in areas of low to moderate probability, where building codes usually require less seismic resilience, seismic risk is likely to be greater. Infrastructural damage in East Malaysia during the 2015

  12. The 2018 and 2020 Updates of the U.S. National Seismic Hazard Models

    Science.gov (United States)

    Petersen, M. D.

    2017-12-01

    During 2018 the USGS will update the 2014 National Seismic Hazard Models by incorporating new seismicity models, ground motion models, site factors, fault inputs, and by improving weights to ground motion models using empirical and other data. We will update the earthquake catalog for the U.S. and introduce new rate models. Additional fault data will be used to improve rate estimates on active faults. New ground motion models (GMMs) and site factors for Vs30 have been released by the Pacific Earthquake Engineering Research Center (PEER) and we will consider these in assessing ground motions in craton and extended margin regions of the central and eastern U.S. The USGS will also include basin-depth terms for selected urban areas of the western United States to improve long-period shaking assessments using published depth estimates to 1.0 and 2.5 km/s shear wave velocities. We will produce hazard maps for input into the building codes that span a broad range of periods (0.1 to 5 s) and site classes (shear wave velocity from 2000 m/s to 200 m/s in the upper 30 m of the crust, Vs30). In the 2020 update we plan on including: a new national crustal model that defines basin depths required in the latest GMMs, new 3-D ground motion simulations for several urban areas, new magnitude-area equations, and new fault geodetic and geologic strain rate models. The USGS will also consider including new 3-D ground motion simulations for inclusion in these long-period maps. These new models are being evaluated and will be discussed at one or more regional and topical workshops held at the beginning of 2018.

  13. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  14. Seismic hazard assessment of Iran

    Directory of Open Access Journals (Sweden)

    M. Ghafory-Ashtiany

    1999-06-01

    Full Text Available The development of the new seismic hazard map of Iran is based on probabilistic seismic hazard computation using historical earthquake data, geology, tectonics, fault activity and seismic source models in Iran. These maps have been prepared to indicate the earthquake hazard of Iran in the form of iso-acceleration contour lines and seismic hazard zoning, by using current probabilistic procedures. They display the probabilistic estimates of Peak Ground Acceleration (PGA) for the return periods of 75 and 475 years. The maps have been divided into intervals of 0.25 degrees in both latitudinal and longitudinal directions to calculate the peak ground acceleration values at each grid point and draw the seismic hazard curves. The results presented in this study will provide the basis for the preparation of seismic risk maps, the estimation of earthquake insurance premiums, and the preliminary site evaluation of critical facilities.
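
    As a hedged illustration of how a return-period value such as the 75- or 475-year PGA is read off a hazard curve at one grid point, the snippet below interpolates an entirely made-up annual-exceedance curve (not the Iranian hazard model) at the corresponding annual rate of roughly 1/T.

      import numpy as np

      # Illustrative hazard curve for one grid point: annual probability that PGA
      # exceeds each level (made-up values, not the Iranian hazard model).
      pga_levels = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])          # g
      annual_exceed_prob = np.array([0.08, 0.03, 8e-3, 3e-3, 1.3e-3, 6e-4])

      def pga_for_return_period(T_years):
          """Interpolate the hazard curve (log-log) at the 1/T annual rate.

          Strictly, a T-year return period corresponds to an annual exceedance
          probability of 1 - exp(-1/T); 1/T is the usual close approximation.
          """
          target = 1.0 / T_years
          # np.interp needs increasing x, so reverse the decreasing probabilities
          return float(np.interp(np.log(target),
                                 np.log(annual_exceed_prob[::-1]),
                                 pga_levels[::-1]))

      for T in (75, 475):
          print(f"{T:4d}-year return period -> PGA ~ {pga_for_return_period(T):.2f} g")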

  15. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Tsunami hazard map in eastern Bali

    International Nuclear Information System (INIS)

    Afif, Haunan; Cipta, Athanasius

    2015-01-01

    Bali is a popular tourist destination for both Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and the Eurasian Plate in the south and to the back-arc thrust off the northern coast of Bali, making Bali prone to earthquakes and tsunamis. A tsunami hazard map is needed for better understanding of the hazard level in a particular area, and tsunami modeling is one of the most reliable techniques to produce such a map. Tsunami modeling was conducted using TUNAMI N2 and set up for two tsunami source scenarios: the subduction zone in the south of Bali and the back thrust in the north of Bali. The tsunami hazard zone is divided into 3 zones: the first is a high hazard zone with inundation height of more than 3 m; the second is a moderate hazard zone with inundation height of 1 to 3 m; and the third is a low tsunami hazard zone with tsunami inundation height of less than 1 m. The two scenarios showed that the southern region has a greater potential tsunami impact than the northern areas. This is clearly shown by the distribution of the inundated area: the area in the south of Bali, including the islands of Nusa Penida, Nusa Lembongan and Nusa Ceningan, is wider than on the northern coast of Bali, although the northern region of Nusa Penida Island is more inundated due to the coastal topography.

  17. Tsunami hazard map in eastern Bali

    Energy Technology Data Exchange (ETDEWEB)

    Afif, Haunan, E-mail: afif@vsi.esdm.go.id [Geological Agency, Bandung (Indonesia); Cipta, Athanasius [Geological Agency, Bandung (Indonesia); Australian National University, Canberra (Australia)

    2015-04-24

    Bali is a popular tourist destination for both Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and the Eurasian Plate in the south and to the back-arc thrust off the northern coast of Bali, making Bali prone to earthquakes and tsunamis. A tsunami hazard map is needed for better understanding of the hazard level in a particular area, and tsunami modeling is one of the most reliable techniques to produce such a map. Tsunami modeling was conducted using TUNAMI N2 and set up for two tsunami source scenarios: the subduction zone in the south of Bali and the back thrust in the north of Bali. The tsunami hazard zone is divided into 3 zones: the first is a high hazard zone with inundation height of more than 3 m; the second is a moderate hazard zone with inundation height of 1 to 3 m; and the third is a low tsunami hazard zone with tsunami inundation height of less than 1 m. The two scenarios showed that the southern region has a greater potential tsunami impact than the northern areas. This is clearly shown by the distribution of the inundated area: the area in the south of Bali, including the islands of Nusa Penida, Nusa Lembongan and Nusa Ceningan, is wider than on the northern coast of Bali, although the northern region of Nusa Penida Island is more inundated due to the coastal topography.

  18. Tsunami hazard map in eastern Bali

    Science.gov (United States)

    Afif, Haunan; Cipta, Athanasius

    2015-04-01

    Bali is a popular tourist destination for both Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and the Eurasian Plate in the south and to the back-arc thrust off the northern coast of Bali, making Bali prone to earthquakes and tsunamis. A tsunami hazard map is needed for better understanding of the hazard level in a particular area, and tsunami modeling is one of the most reliable techniques to produce such a map. Tsunami modeling was conducted using TUNAMI N2 and set up for two tsunami source scenarios: the subduction zone in the south of Bali and the back thrust in the north of Bali. The tsunami hazard zone is divided into 3 zones: the first is a high hazard zone with inundation height of more than 3 m; the second is a moderate hazard zone with inundation height of 1 to 3 m; and the third is a low tsunami hazard zone with tsunami inundation height of less than 1 m. The two scenarios showed that the southern region has a greater potential tsunami impact than the northern areas. This is clearly shown by the distribution of the inundated area: the area in the south of Bali, including the islands of Nusa Penida, Nusa Lembongan and Nusa Ceningan, is wider than on the northern coast of Bali, although the northern region of Nusa Penida Island is more inundated due to the coastal topography.

  19. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, as information was added before each subsequent step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data, charged pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  20. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for predicting simultaneously body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED), and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
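
    A minimal sketch of the multivariate idea, fitting one linear model that predicts several body-composition outcomes at once from age, weight, height and waist circumference and reporting a standard error of prediction (SEP) per outcome, is given below; the data are synthetic stand-ins, not the NHANES or French DXA datasets, and the model form is only illustrative.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic stand-in for a DXA dataset: predictors and three outcomes
      # (whole-body fat %, trunk lean mass, appendicular lean mass).
      n = 2000
      age    = rng.uniform(20, 80, n)
      weight = rng.normal(75, 12, n)
      height = rng.normal(170, 9, n)
      waist  = 0.5 * weight + rng.normal(45, 5, n)
      X = np.column_stack([age, weight, height, waist])

      bf_pct    = 10 + 0.08 * age + 0.35 * waist - 0.12 * height + rng.normal(0, 3.3, n)
      trunk_ln  = 0.30 * weight + 0.05 * height - 0.04 * age + rng.normal(0, 1.8, n)
      append_ln = 0.28 * weight + 0.06 * height - 0.05 * age + rng.normal(0, 2.0, n)
      Y = np.column_stack([bf_pct, trunk_ln, append_ln])

      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.33, random_state=1)

      model = LinearRegression().fit(X_tr, Y_tr)        # one fit, several outcomes
      sep = np.sqrt(np.mean((model.predict(X_te) - Y_te) ** 2, axis=0))
      for name, err in zip(["BF%", "trunk lean (kg)", "appendicular lean (kg)"], sep):
          print(f"SEP for {name}: {err:.2f}")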

  1. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Estimating hurricane hazards using a GIS system

    Directory of Open Access Journals (Sweden)

    A. Taramelli

    2008-08-01

    Full Text Available This paper develops a GIS-based integrated approach to the Multi-Hazard model method, with reference to hurricanes. This approach has three components: data integration, hazard assessment and score calculation to estimate elements at risk such as affected area and affected population. First, spatial data integration issues within a GIS environment, such as geographical scales and data models, are addressed. Particularly, the integration of physical parameters and population data is achieved linking remotely sensed data with a high resolution population distribution in GIS. In order to assess the number of affected people, involving heterogeneous data sources, the selection of spatial analysis units is basic. Second, specific multi-hazard tasks, such as hazard behaviour simulation and elements at risk assessment, are composed in order to understand complex hazard and provide support for decision making. Finally, the paper concludes that the integrated approach herein presented can be used to assist emergency management of hurricane consequences, in theory and in practice.

  3. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
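
    A hedged sketch of the core idea, tuning SVR hyperparameters with a small evolutionary search against cross-validated performance, is shown below; the dataset, parameter ranges, and toy selection/mutation scheme are illustrative assumptions and do not reproduce the paper's encoding or its financial and patent indicators.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

      def fitness(params):
          """Cross-validated R^2 for a (log10 C, log10 gamma, epsilon) triple."""
          log_c, log_gamma, eps = params
          model = SVR(kernel="rbf", C=10 ** log_c, gamma=10 ** log_gamma,
                      epsilon=max(eps, 1e-3))
          return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

      # Toy genetic search: keep the best half, perturb it to refill the population.
      pop = rng.uniform([-2, -4, 0.01], [3, 0, 1.0], size=(20, 3))
      for generation in range(15):
          scores = np.array([fitness(p) for p in pop])
          elite = pop[np.argsort(scores)[-10:]]                       # selection
          children = elite + rng.normal(0.0, 0.2, size=elite.shape)   # mutation
          pop = np.vstack([elite, children])

      best = pop[np.argmax([fitness(p) for p in pop])]
      print("best (log10 C, log10 gamma, epsilon):", np.round(best, 3))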

  4. Use of agent-based modelling in emergency management under a range of flood hazards

    Directory of Open Access Journals (Sweden)

    Tagg Andrew

    2016-01-01

    Full Text Available The Life Safety Model (LSM) was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.

  5. A spectral clustering search algorithm for predicting shallow landslide size and location

    Science.gov (United States)

    Dino Bellugi; David G. Milledge; William E. Dietrich; Jim A. McKean; J. Taylor Perron; Erik B. Sudderth; Brian Kazian

    2015-01-01

    The potential hazard and geomorphic significance of shallow landslides depend on their location and size. Commonly applied one-dimensional stability models do not include lateral resistances and cannot predict landslide size. Multi-dimensional models must be applied to specific geometries, which are not known a priori, and testing all possible geometries is...

  6. A real-time crash prediction model for the ramp vicinities of urban expressways

    Directory of Open Access Journals (Sweden)

    Moinul Hossain

    2013-07-01

    Full Text Available Ramp vicinities are arguably the best-known black-spots on urban expressways. There, while maintaining high speed, drivers need to respond to several complex events such as maneuvering, reading road signs, route planning and maintaining safe distance from other maneuvering vehicles simultaneously, which demands a higher level of cognitive response to ensure safety. Therefore, any additional discomfort caused by traffic dynamics may induce a driving error resulting in a crash. This manuscript presents a methodology for identifying these dynamically forming hazardous traffic conditions near the ramp vicinities with high resolution real-time traffic flow data. It separates the ramp vicinities into four zones – upstream and downstream of entrance and exit ramps, and builds four separate real-time crash prediction models. Around two years (December 2007 to October 2009) of crash data, as well as the matching traffic sensor data from the Shibuya 3 and Shinjuku 4 expressways under the jurisdiction of Tokyo Metropolitan Expressway Company Limited, have been utilized for this research. Random multinomial logit, a forest of multinomial logit models, has been used to identify the most important variables. Finally, a real-time modeling method, the Bayesian belief net (BBN), has been employed to build the four models using ramp flow, flow and congestion index in the upstream and flow and speed in the downstream of the ramp location as variables. The newly proposed models could predict 50%, 42%, 43% and 55% of the future crashes with around 10% false alarms for the downstream of entrance, downstream of exit, upstream of entrance and upstream of exit ramps, respectively. The models can be utilized in combination with various traffic smoothing measures such as ramp metering, variable speed limits, warning messages through variable message signs, etc. to enhance safety near the ramp vicinities.

  7. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
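
    The PD part of the controller is easy to illustrate; the sketch below simulates a single rigid joint tracked with a proportional-derivative law (the gains, inertia and time step are made-up values, not the paper's seven-segment plant model).

      import numpy as np

      def simulate_pd_joint(theta_ref, kp=80.0, kd=12.0, dt=0.001, steps=2000,
                            inertia=0.05):
          """Track a reference angle on a single rigid joint with a PD law."""
          theta, omega = 0.0, 0.0
          for _ in range(steps):
              torque = kp * (theta_ref - theta) - kd * omega   # PD control law
              omega += (torque / inertia) * dt                 # explicit Euler dynamics
              theta += omega * dt
          return theta

      target = np.deg2rad(20.0)
      print(f"final angle: {simulate_pd_joint(target):.4f} rad (target {target:.4f})")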

  8. Seismic hazards and land-use planning

    Science.gov (United States)

    Nichols, Donald R.; Buchanan-Banks, Jane M.

    1974-01-01

    Basic earth-science data are necessary for a realistic assessment of seismic hazards and as a basis for limiting corrective land-use controls only to those areas of greatest hazard. For example, the location, character, and amount of likely displacement and activity of surface faulting can be predicted if detailed geologic maps and seismic data are available and are augmented by field studies at critical localities. Because few structures can withstand displacement of their foundations, they should be located off active fault traces, the distance varying with the character of faulting, the certainty with which fault traces are known, and the importance of the structure. Recreational activities and other nonoccupancy land uses should be considered for fault zone areas where land is under pressure for development; elsewhere, such areas should remain as open space. Two methods of predicting ground shaking effects have applications to land-use decisions: (1) Relative earthquake effects can be related to firmness of the ground and can be used in a gross way to allocate population density in the absence of more sophisticated analyses; and (2) intensity maps, based on, (a) damage from former earthquakes, or (b) a qualitative analyses of geologic units added to a design earthquake, can be helpful both for general and specific plans. Theoretical models are used with caution to predict ground motion for critical structures to be located at specific sites with unique foundation conditions. Fully adequate methods of assessing possible shaking remain to be developed. Where land-use decisions do not reflect likely ground shaking effects, stringent building codes are needed, particularly for important structures. Ground failure (landsliding, ground cracking and lurching, differential settlement, sand boils, and subsidence) commonly results from liquefaction, loss of soil strength, or compaction. Areas suspected of being most likely to fail should not be developed unless detailed

  9. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  10. Winter wheat response to irrigation, nitrogen fertilization, and cold hazards in the Community Land Model 5

    Science.gov (United States)

    Lu, Y.

    2017-12-01

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in soil carbon balance and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under changing climate, but also for understanding the energy and water cycles for winter wheat dominated regions. A winter wheat growth model has been developed in the Community Land Model 4.5 (CLM4.5), but its responses to irrigation and nitrogen fertilization have not been validated. In this study, I will validate winter wheat growth response to irrigation and nitrogen fertilization at five winter wheat field sites (TXLU, KSMA, NESA, NDMA, and ABLE) in North America, which were originally designed to understand winter wheat response to nitrogen fertilization and water treatments (4 nitrogen levels and 3 irrigation regimes). I also plan to further update the linkages between winter wheat yield and cold hazards. The previous cold damage function only indirectly affects yield through a reduction in leaf area index (LAI) and hence photosynthesis; such an approach can sometimes produce an unrealistically high yield when the reduced LAI saves more nutrients for the grain-fill stage.

  11. Earthquake Hazard and Risk in New Zealand

    Science.gov (United States)

    Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.

    2014-12-01

    To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), will update several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updating the subduction zone geometry and recurrence rate, and implementing new background rates and a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in the northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where we don't have a lot of information about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. Thus small changes to the background rates

  12. Exploration of resilience assessments for natural hazards

    Science.gov (United States)

    Lo Jacomo, Anna; Han, Dawei; Champneys, Alan

    2017-04-01

    The occurrence of extreme events due to natural hazards is difficult to predict. Extreme events are stochastic in nature, there is a lack of long term data on their occurrence, and there are still gaps in our understanding of their physical processes. This difficulty in prediction will be exacerbated by climate change and human activities. Yet traditional risk assessments measure risk as the probability of occurrence of a hazard, multiplied by the consequences of the hazard occurring, which ignores the recovery process. In light of the increasing concerns on disaster risks and the related system recovery, resilience assessments are being used as an approach which complements and builds on traditional risk assessments and management. In mechanical terms, resilience refers to the amount of energy per unit volume that a material can absorb while maintaining its ability to return to its original shape. Resilience was first applied in the fields of psychology and ecology, and more recently has been used in areas such as social sciences, economics, and engineering. A common metaphor for understanding resilience is the stability landscape. The landscape consists of a surface of interconnected basins, where each basin represents different states of a system, which is a point on the stability landscape. The resilience of the system is its capacity and tendency to remain within a particular basin. This depends on the topology of the landscape, on the system's current position, and on its reaction to different shocks and stresses. In practical terms, resilience assessments have been conducted for various purposes in different sectors. These assessments vary in their required inputs, the methodologies applied, and the output they produce. Some measures used for resilience assessments are hazard independent. These focus on the intrinsic capabilities of a system, for example the insurance coverage of a community, or the buffer capacity of a water storage reservoir. Other

  13. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m(3) . Measurement-based predictions yielded an average exposure of 78 Bq/m(3) . This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
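
    The measurement-based step of weighting municipality means by population reduces to a weighted average; a tiny illustration with made-up municipality values (not the Swiss data) is given below.

      import numpy as np

      # Illustrative municipality-level inputs (not the Swiss data): mean measured
      # radon concentration (Bq/m3), assumed already corrected for the floor
      # distribution, and municipality population used as the weight.
      radon_mean = np.array([95.0, 60.0, 120.0, 70.0])
      population = np.array([12000, 45000, 3000, 25000])

      national_mean = np.average(radon_mean, weights=population)
      print(f"population-weighted mean exposure: {national_mean:.1f} Bq/m3")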

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. An improved agreement between the ChB model and the DTF experimental data showed that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  16. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
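
    A hedged sketch of the combination of procedures is shown below: leave-one-out (jackknife) prediction errors from a simple local predictor, here a nearest-neighbour average standing in for the paper's local spatial nonparametric model, are resampled with the bootstrap to place confidence bounds on the total predicted volume. All data and the predictor itself are illustrative assumptions, not the Antrim Shale analysis.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic stand-in for drilled cells: (x, y) locations and recovered volumes.
      n = 120
      xy = rng.uniform(0, 10, size=(n, 2))
      vol = 5.0 + 0.8 * xy[:, 0] + 0.4 * xy[:, 1] + rng.gamma(2.0, 1.0, n)

      def local_predict(train_xy, train_vol, target_xy, k=8):
          """Simple local predictor: average of the k nearest drilled cells."""
          d = np.linalg.norm(train_xy - target_xy, axis=1)
          return train_vol[np.argsort(d)[:k]].mean()

      # Jackknife: leave each drilled cell out and predict it from the rest.
      jack_err = np.array([
          vol[i] - local_predict(np.delete(xy, i, axis=0), np.delete(vol, i), xy[i])
          for i in range(n)
      ])
      print(f"jackknife RMSE per cell: {np.sqrt(np.mean(jack_err ** 2)):.2f}")

      # Predict undrilled target cells, then bootstrap the total volume by
      # resampling the jackknife errors to propagate prediction uncertainty.
      targets = rng.uniform(0, 10, size=(40, 2))
      point_est = np.array([local_predict(xy, vol, t) for t in targets])

      totals = []
      for _ in range(2000):
          resampled_err = rng.choice(jack_err, size=len(targets), replace=True)
          totals.append(np.sum(point_est + resampled_err))
      lo, hi = np.percentile(totals, [5, 95])
      print(f"total volume: {point_est.sum():.1f} (90% bounds {lo:.1f} - {hi:.1f})")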

  17. Hazards to nuclear plants from surface traffic accidents

    International Nuclear Information System (INIS)

    Hornyik, K.

    1975-01-01

    Analytic models have been developed for evaluating hazards to nuclear plants from hazardous-materials accidents in the vicinity of the plant. In particular, these models permit the evaluation of hazards from such accidents occurring on surface traffic routes near the plant. The analysis uses statistical information on accident rates, traffic frequency, and cargo-size distribution along with parameters describing properties of the hazardous cargo, plant design, and atmospheric conditions, to arrive at a conservative estimate of the annual probability of a catastrophic event. Two of the major effects associated with hazardous-materials accidents, explosion and release of toxic vapors, are treated by a common formalism which can be readily applied to any given case by means of a graphic procedure. As an example, for a typical case it is found that railroad shipments of chlorine in 55-ton tank cars constitute a greater hazard to a nearby nuclear plant than equally frequent rail shipments of explosives in amounts of 10 tons. 11 references. (U.S.)

  18. Proportional hazards model with varying coefficients for length-biased data.

    Science.gov (United States)

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effect of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established by using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the application of the proposed method.
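
    The varying-coefficient, length-biased estimator itself is beyond a short snippet, but the baseline it extends, a standard proportional hazards fit to right-censored data, can be sketched as follows (Python with the lifelines package; the synthetic data and covariate names are assumptions, not the Channing House data).

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(1)

      # Synthetic right-censored survival data with one exposure and one covariate.
      n = 500
      exposure = rng.binomial(1, 0.5, n)
      covariate = rng.normal(0.0, 1.0, n)
      hazard = 0.1 * np.exp(0.7 * exposure - 0.3 * covariate)
      event_time = rng.exponential(1.0 / hazard)           # true event times
      censor_time = rng.exponential(15.0, n)               # independent censoring

      df = pd.DataFrame({
          "T": np.minimum(event_time, censor_time),        # observed time
          "E": (event_time <= censor_time).astype(int),    # event indicator
          "exposure": exposure,
          "covariate": covariate,
      })

      # Standard proportional hazards fit (constant coefficients, right censoring).
      cph = CoxPHFitter()
      cph.fit(df, duration_col="T", event_col="E")
      cph.print_summary()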

  19. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  20. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of their potential for bankruptcy, they can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. The comparative study of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression) shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%
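
    A minimal version of such a comparison, cross-validating a few of the listed classifiers on synthetic financial-ratio-like data with scikit-learn, might look like the sketch below; fuzzy k-NN, the bagging nearest-neighbour SVM and the MLP/regression hybrid are not standard scikit-learn estimators and are omitted here.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for financial-ratio features with a bankrupt/healthy label.
      X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                                 weights=[0.7, 0.3], random_state=0)

      models = {
          "k-NN": KNeighborsClassifier(n_neighbors=7),
          "SVM (RBF)": SVC(C=1.0, gamma="scale"),
          "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
      }

      for name, clf in models.items():
          pipe = make_pipeline(StandardScaler(), clf)     # scale features first
          acc = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
          print(f"{name:10s} accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")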

  1. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to promote the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1, k) model. The traditional grey model's weakness as being sensitive to initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of NGM(1,1, k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.

  2. STakeholder-Objective Risk Model (STORM): Determiningthe aggregated risk of multiple contaminant hazards in groundwater well catchments

    DEFF Research Database (Denmark)

    Enzenhoefer, R.; Binning, Philip John; Nowak, W.

    2015-01-01

    Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any......-pathway-receptor concept, mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired...

  3. Modeling retrospective attribution of responsibility to hazard-managing institutions: an example involving a food contamination incident.

    Science.gov (United States)

    Johnson, Branden B; Hallman, William K; Cuite, Cara L

    2015-03-01

    Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.

  4. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it was only ∼20 years ago that similar efforts started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care, as well as its cost-effectiveness, should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  5. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  6. Bibliography - Existing Guidance for External Hazard Modelling

    International Nuclear Information System (INIS)

    Decker, Kurt

    2015-01-01

    The bibliography of deliverable D21.1 includes existing international and national guidance documents and standards on external hazard assessment, together with a selection of recent scientific papers regarded as providing useful information on the state of the art of external event modelling. The literature database is subdivided into International Standards, National Standards, and Science Papers. The deliverable is treated as a 'living document' which is regularly updated as necessary during the lifetime of ASAMPSA-E. The current content of the database is about 140 papers. Most of the articles are available as full-text versions in PDF format. The deliverable is available as an EndNote X4 database and as text files. The database includes the following information: Reference, Key words, Abstract (if available), PDF file of the original paper (if available), and Notes (comments by the ASAMPSA-E consortium, if available). The database is stored on the ASAMPSA-E FTP server hosted by IRSN. PDF files of original papers are accessible through the EndNote software.

  7. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). The Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil to lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80.0%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B related HCC with high sensitivity, specificity, and accuracy.
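
    The reported PI equation and the 3.38 cutoff can be applied directly; the sketch below is only an illustration, since the abstract does not specify how the categorical predictors are coded, so the 0/1 and ordinal coding used here is an assumption.

```python
# Minimal sketch applying the reported prognostic index (PI) formula and 3.38 cutoff.
# The coding of each predictor (0/1 presence flags, grade as an ordinal score) is an
# assumption for illustration; the abstract does not define it.
def hcc_prognostic_index(nlr, hg, prm, mvi, nt, tace):
    """PI = 0.377*NLR + 0.554*HG + 0.927*PRM + 0.778*MVI + 0.740*NT - 0.831*TACE."""
    return (0.377 * nlr + 0.554 * hg + 0.927 * prm
            + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

# Hypothetical patient: NLR 3.2, grade coded 2, positive margin, microvascular
# invasion present, multiple tumors coded as 1, no postoperative TACE.
pi = hcc_prognostic_index(nlr=3.2, hg=2, prm=1, mvi=1, nt=1, tace=0)
print(f"PI = {pi:.2f} ->", "high risk" if pi > 3.38 else "lower risk")
```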

  8. Development of a risk prediction model for lung cancer: The Japan Public Health Center-based Prospective Study.

    Science.gov (United States)

    Charvat, Hadrien; Sasazuki, Shizuka; Shimazu, Taichi; Budhathoki, Sanjeev; Inoue, Manami; Iwasaki, Motoki; Sawada, Norie; Yamaji, Taiki; Tsugane, Shoichiro

    2018-03-01

    Although the impact of tobacco consumption on the occurrence of lung cancer is well-established, risk estimation could be improved by risk prediction models that consider various smoking habits, such as quantity, duration, and time since quitting. We constructed a risk prediction model using a population of 59 161 individuals from the Japan Public Health Center (JPHC) Study Cohort II. A parametric survival model was used to assess the impact of age, gender, and smoking-related factors (cumulative smoking intensity measured in pack-years, age at initiation, and time since cessation). Ten-year cumulative probability of lung cancer occurrence estimates were calculated with consideration of the competing risk of death from other causes. Finally, the model was externally validated using 47 501 individuals from JPHC Study Cohort I. A total of 1210 cases of lung cancer occurred during 986 408 person-years of follow-up. We found a dose-dependent effect of tobacco consumption with hazard ratios for current smokers ranging from 3.78 (2.00-7.16) for cumulative consumption ≤15 pack-years to 15.80 (9.67-25.79) for >75 pack-years. Risk decreased with time since cessation. Ten-year cumulative probability of lung cancer occurrence estimates ranged from 0.04% to 11.14% in men and 0.07% to 6.55% in women. The model showed good predictive performance regarding discrimination (cross-validated c-index = 0.793) and calibration (cross-validated χ² = 6.60; P-value = .58). The model still showed good discrimination in the external validation population (c-index = 0.772). In conclusion, we developed a prediction model to estimate the probability of developing lung cancer based on age, gender, and tobacco consumption. This model appears useful in encouraging high-risk individuals to quit smoking and undergo increased surveillance. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
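
    As a hedged illustration of the competing-risk idea above (not the authors' parametric model), the sketch below computes a 10-year cumulative probability of lung cancer when death from other causes competes, assuming constant cause-specific hazards with made-up values.

```python
# Sketch of a 10-year cumulative probability of lung cancer in the presence of the
# competing risk of death from other causes. The study fitted a parametric survival
# model; here both cause-specific hazards are assumed constant purely for illustration.
import math

def cumulative_incidence(hazard_lc, hazard_death, t=10.0):
    """CIF_lc(t) = integral_0^t lambda_lc * exp(-(lambda_lc + lambda_death) * u) du."""
    total = hazard_lc + hazard_death
    return hazard_lc / total * (1.0 - math.exp(-total * t))

# Hypothetical annual hazards: 0.002/yr for lung cancer, 0.02/yr for other-cause death
print(f"{cumulative_incidence(0.002, 0.02):.2%} 10-year cumulative probability")
```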

  9. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Background: Prediction models for infertility treatment success have been presented for the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  10. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    Science.gov (United States)

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
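
    One building block of such an exploratory pipeline, the loglinear model, can be sketched as a Poisson GLM fitted to the cell counts of a contingency table; the variables and counts below are hypothetical stand-ins for the hazmat database described above, and statsmodels is used for convenience.

```python
# Minimal sketch of the loglinear-modelling step: a Poisson GLM on contingency-table
# cell counts. The variables and counts are hypothetical; the study's database is far
# larger and more highly categorical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical cross-tabulation: container failure (yes/no) vs. release severity
cells = pd.DataFrame({
    "failure":  ["no", "no", "no", "yes", "yes", "yes"],
    "severity": ["low", "medium", "high"] * 2,
    "count":    [420, 130, 25, 160, 140, 95],
})

# Independence (main-effects-only) loglinear model
indep = smf.glm("count ~ failure + severity", data=cells,
                family=sm.families.Poisson()).fit()

# A residual deviance that is large relative to its degrees of freedom indicates that
# failure and severity are associated, i.e. an interaction term is needed.
print("deviance:", round(indep.deviance, 1), "df:", indep.df_resid)
```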

  11. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear and multi-scale properties of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales: from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed
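
    As an aside on the frequency-magnitude (Gutenberg-Richter) relation mentioned above, log10 N(≥M) = a − bM, the sketch below estimates the b-value from a synthetic catalog using Aki's maximum-likelihood formula; the catalog and completeness magnitude are illustrative and are not taken from the dissertation.

```python
# Sketch of the frequency-magnitude relation: log10 N(>=M) = a - b*M. The b-value is
# estimated with Aki's maximum-likelihood formula from a synthetic catalog; a simulator
# such as Virtual Quake would supply a far longer catalog than real records allow.
import numpy as np

rng = np.random.default_rng(1)
m_c = 3.0                                   # magnitude of completeness (assumed)
b_true = 1.0
# Under Gutenberg-Richter, magnitudes above m_c are exponential with rate b*ln(10)
mags = m_c + rng.exponential(1.0 / (b_true * np.log(10)), size=5000)

b_hat = np.log10(np.e) / (mags.mean() - m_c)   # Aki (1965) ML estimator
print(f"estimated b-value: {b_hat:.2f}")
```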

  12. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, along with other regions such as Phangna and Phuket, devastating human lives, coastal communications and economically viable activities. This research study aimed to generate a tsunami hazard prevention based land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, were used as the land use zoning criteria. These criteria were weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, were used. The model was generated with five categories (high, moderate, low, very low and not suitable regions), each illustrated with an appropriate definition for decision makers planning to redevelop the region.
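
    A minimal sketch of the Saaty-scale weighting step (the analytic hierarchy process eigenvector method) is given below; the 3 × 3 judgment matrix is hypothetical, whereas the study weighted seven factors whose pairwise judgments are not reported in the abstract.

```python
# Minimal sketch of deriving factor weights from Saaty-scale pairwise comparisons
# (analytic hierarchy process). The 3x3 judgment matrix is hypothetical; the study
# weighted seven factors (elevation, shoreline proximity, population density, ...).
import numpy as np

# A[i, j] = importance of factor i relative to factor j on Saaty's 1-9 scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

# Consistency ratio (CR < 0.1 is conventionally acceptable); random index RI = 0.58 for n = 3
ci = (eigvals.real[k] - 3) / (3 - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```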

  13. Tsunami hazard preventing based land use planning model using GIS technique in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, along with other regions such as Phangna and Phuket, devastating human lives, coastal communications and economically viable activities. This research study aimed to generate a tsunami hazard prevention based land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, were used as the land use zoning criteria. These criteria were weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, were used. The model was generated with five categories (high, moderate, low, very low and not suitable regions), each illustrated with an appropriate definition for decision makers planning to redevelop the region. (author)

  14. Idiopathic Pulmonary Fibrosis: Gender-Age-Physiology Index Stage for Predicting Future Lung Function Decline.

    Science.gov (United States)

    Salisbury, Margaret L; Xia, Meng; Zhou, Yueren; Murray, Susan; Tayob, Nabihah; Brown, Kevin K; Wells, Athol U; Schmidt, Shelley L; Martinez, Fernando J; Flaherty, Kevin R

    2016-02-01

    Idiopathic pulmonary fibrosis is a progressive lung disease with variable course. The Gender-Age-Physiology (GAP) Index and staging system uses clinical variables to stage mortality risk. It is unknown whether clinical staging predicts future decline in pulmonary function. We assessed whether the GAP stage predicts future pulmonary function decline and whether interval pulmonary function change predicts mortality after accounting for stage. Patients with idiopathic pulmonary fibrosis (N = 657) were identified retrospectively at three tertiary referral centers, and baseline GAP stages were assessed. Mixed models were used to describe average trajectories of FVC and diffusing capacity of the lung for carbon monoxide (Dlco). Multivariable Cox proportional hazards models were used to assess whether declines in pulmonary function ≥ 10% in 6 months predict mortality after accounting for GAP stage. Over a 2-year period, GAP stage was not associated with differences in yearly lung function decline. After accounting for stage, a 10% decrease in FVC or Dlco over 6 months independently predicted death or transplantation (FVC hazard ratio, 1.37; Dlco hazard ratio, 1.30; both, P ≤ .03). Patients with GAP stage 2 with declining pulmonary function experienced a survival profile similar to patients with GAP stage 3, with 1-year event-free survival of 59.3% (95% CI, 49.4-67.8) vs 56.9% (95% CI, 42.2-69.1). Baseline GAP stage predicted death or lung transplantation but not the rate of future pulmonary function decline. After accounting for GAP stage, a decline of ≥ 10% over 6 months independently predicted death or lung transplantation. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
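
    The multivariable Cox proportional hazards step described above can be sketched with the lifelines package on synthetic data; the column names, covariate coding, and event times below are hypothetical illustrations, not the study data.

```python
# Minimal sketch of a multivariable Cox proportional hazards analysis with a staging
# covariate and a binary pulmonary-function-decline covariate, using lifelines on
# synthetic data. All columns and values are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "gap_stage":     rng.integers(1, 4, n),    # GAP stage 1-3 (treated as numeric here)
    "fvc_decline10": rng.integers(0, 2, n),    # >=10% FVC decline over 6 months (0/1)
})
# Synthetic event times: higher stage and FVC decline shorten survival
rate = 0.05 * np.exp(0.5 * (df["gap_stage"] - 1) + 0.3 * df["fvc_decline10"])
df["time"] = rng.exponential(1.0 / rate)
df["event"] = (df["time"] < 5.0).astype(int)   # administrative censoring at 5 years
df.loc[df["event"] == 0, "time"] = 5.0

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                             # hazard ratios with 95% CIs
```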

  15. Stochastic modeling of a hazard detection and avoidance maneuver—The planetary landing case

    International Nuclear Information System (INIS)

    Witte, Lars

    2013-01-01

    Hazard Detection and Avoidance (HDA) functionality, i.e. the ability to recognize and avoid potentially hazardous terrain features, is regarded as an enabling technology for upcoming robotic planetary landing missions. In the run-up to any landing mission, the landing site safety assessment is an important task in the systems and mission engineering process. To contribute to this task, this paper presents a mathematical framework for considering the HDA strategy and system constraints in this mission engineering aspect. The HDA maneuver is modeled as a stochastic decision process based on Markov chains, mapping an initial dispersion at an arrival gate to a new dispersion pattern affected by the divert decision-making and system constraints. The implications for an efficient numerical implementation are addressed. An example case study is given to demonstrate the implementation and use of the proposed scheme.
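
    The core Markov-chain idea, mapping an arrival-gate dispersion to a post-divert dispersion through a one-step transition matrix, can be sketched as follows; the three terrain cells and all probabilities are hypothetical.

```python
# Minimal sketch of the Markov-chain mapping described above: an initial landing
# dispersion over discretized terrain cells is propagated through a one-step transition
# matrix that encodes the divert decision logic. All numbers are hypothetical.
import numpy as np

# Initial dispersion over three cells at the arrival gate: [safe, rough, crater]
p0 = np.array([0.6, 0.3, 0.1])

# P[i, j] = probability of finally landing in cell j given arrival over cell i,
# after the HDA divert decision (most mass over hazards is diverted to 'safe')
P = np.array([[0.95, 0.04, 0.01],
              [0.70, 0.25, 0.05],
              [0.60, 0.10, 0.30]])

p_final = p0 @ P                      # dispersion after the divert maneuver
print("final dispersion:", p_final, "| rows sum to", P.sum(axis=1))
```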

  16. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.
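
    The AIC-based model selection can be sketched with a plain Gaussian HMM from hmmlearn standing in for the semi-Markov model (hmmlearn does not model explicit state durations), fitted to a synthetic degradation signal; the data and the approximate free-parameter count are illustrative only.

```python
# Simplified sketch of AIC-based selection of the number of hidden states. A plain
# Gaussian HMM stands in for the semi-Markov model; data and parameter counting are
# illustrative approximations.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
# Synthetic 1-D degradation signal with two regimes (healthy vs. degrading)
X = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 1.5, 200)]).reshape(-1, 1)

for k in (1, 2, 3, 4):
    model = GaussianHMM(n_components=k, covariance_type="diag", n_iter=100,
                        random_state=0).fit(X)
    log_l = model.score(X)
    # Approximate free parameters: start probs + transitions + mean/variance per state
    n_params = (k - 1) + k * (k - 1) + 2 * k
    aic = 2 * n_params - 2 * log_l
    print(f"states={k}  AIC={aic:.1f}")
```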

  17. Flow-R, a model for susceptibility mapping of debris flows and other gravitational hazards at a regional scale

    Directory of Open Access Journals (Sweden)

    P. Horton

    2013-04-01

    The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution as a good compromise between processing time
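
    Holmgren's direction algorithm, on which Flow-R's spreading step builds, distributes flow to each lower neighbour in proportion to (tan β)^x; the sketch below applies this basic weighting to a hypothetical 3 × 3 DEM window and does not reproduce the improved Flow-R variant or the frictional runout assessment.

```python
# Sketch of Holmgren's multiple-flow-direction weighting used in the spreading step:
# the fraction of flow sent to each lower neighbour i is (tan beta_i)^x / sum_j (tan beta_j)^x.
# DEM values and the exponent x are hypothetical.
import numpy as np

cellsize = 10.0                       # 10 m DEM, as discussed above
z = np.array([[102.0, 101.5, 101.0],
              [102.5, 100.0,  99.0],
              [103.0,  99.5,  98.5]])  # 3x3 window; centre cell is z[1, 1]

dz = z[1, 1] - z                      # drop to each neighbour (positive = downslope)
dist = cellsize * np.array([[2**0.5, 1, 2**0.5],
                            [1,      1, 1],
                            [2**0.5, 1, 2**0.5]])
tan_beta = np.where(dz > 0, dz / dist, 0.0)
tan_beta[1, 1] = 0.0                  # no flow to the centre cell itself

x = 4.0                               # Holmgren exponent (1 = divergent, large = D8-like)
w = tan_beta ** x
proportions = w / w.sum()             # fraction of flow sent to each neighbour
print(np.round(proportions, 3))
```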

  18. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major database for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem, namely the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards.

  19. Scenario-based earthquake hazard and risk assessment for Baku (Azerbaijan

    Directory of Open Access Journals (Sweden)

    G. Babayev

    2010-12-01

    A rapid growth of population, intensive civil and industrial building, land and water instabilities (e.g. landslides, significant underground water level fluctuations), and the lack of public awareness regarding seismic hazard contribute to the increasing vulnerability of Baku (the capital city of the Republic of Azerbaijan) to earthquakes. In this study, we assess earthquake risk in the city, determined as a convolution of seismic hazard (in terms of the surface peak ground acceleration, PGA), vulnerability (due to building construction fragility, population features, the gross domestic product per capita, and landslide occurrence), and exposure of infrastructure and critical facilities. The earthquake risk assessment provides useful information to identify the factors influencing the risk. A deterministic seismic hazard for Baku is analysed for four earthquake scenarios: near, far, local, and extreme events. The seismic hazard models demonstrate the level of ground shaking in the city: high PGA values are predicted in the southern coastal and north-eastern parts of the city and in some parts of the downtown. The PGA attains its maximal values for the local and extreme earthquake scenarios. We show that the quality of buildings and the probability of their damage, the distribution of urban population, exposure, and the pattern of peak ground acceleration contribute to the seismic risk, while the vulnerability factors play a more prominent role for all earthquake scenarios. Our results can support the elaboration of strategic countermeasure plans for earthquake risk mitigation in the city of Baku.
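
    As a minimal illustration of treating risk as a combination of hazard, vulnerability and exposure layers, the sketch below multiplies three normalized grids cell by cell; the grids are hypothetical and the simple product is only one possible convolution, not the study's scenario-based PGA computation.

```python
# Minimal sketch of combining hazard, vulnerability and exposure layers into a risk
# index on a grid. The 3x3 grids and the multiplicative combination are hypothetical.
import numpy as np

hazard = np.array([[0.2, 0.4, 0.6],         # normalized ground shaking (e.g. PGA)
                   [0.3, 0.5, 0.8],
                   [0.4, 0.7, 0.9]])
vulnerability = np.array([[0.3, 0.3, 0.5],  # building fragility, population, etc.
                          [0.4, 0.6, 0.7],
                          [0.2, 0.5, 0.9]])
exposure = np.array([[0.1, 0.5, 0.8],       # infrastructure and critical facilities
                     [0.2, 0.6, 0.9],
                     [0.1, 0.4, 0.7]])

risk = hazard * vulnerability * exposure    # cell-by-cell combination
print(np.round(risk, 2))
print("highest-risk cell:", np.unravel_index(risk.argmax(), risk.shape))
```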

  20. Evaluation of MEDALUS model for desertification hazard zonation using GIS; study area: Iyzad Khast plain, Iran.

    Science.gov (United States)

    Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik

    2007-08-15

    In this study, the MEDALUS model along with GIS mapping techniques is used to determine the desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of developing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters have mostly been presented for the Mediterranean region in the past, the next step included the addition of other indicators such as groundwater and wind erosion. Then all of the layers, weighted by the environmental conditions present in the area, were combined (following the same MEDALUS framework) to prepare a desertification map. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters to the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land management, vegetation degradation and the salinization of soil and water resources.
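
    The MEDALUS-style layer combination can be sketched as nested geometric means: each quality index is the geometric mean of its parameter scores, and the overall sensitivity index is the geometric mean of the quality indices. The scores below are hypothetical, the grouping follows the original four-index framework, and the study's additional indicators (groundwater, wind erosion) would enter as further indices.

```python
# Sketch of a MEDALUS-style combination of quality layers: each quality index is the
# geometric mean of its parameter scores, and the final hazard/sensitivity index is the
# geometric mean of the quality indices. All scores are hypothetical illustrations.
import numpy as np

def geometric_mean(values):
    values = np.asarray(values, dtype=float)
    return values.prod() ** (1.0 / values.size)

quality = {
    "climate":    geometric_mean([1.4, 1.8, 1.2]),        # e.g. rainfall, aridity, aspect scores
    "soil":       geometric_mean([1.2, 1.6, 1.7, 1.4]),
    "vegetation": geometric_mean([1.5, 1.8, 1.3]),
    "land_use":   geometric_mean([1.6, 1.5]),
}
hazard_index = geometric_mean(list(quality.values()))
print({k: round(v, 2) for k, v in quality.items()}, "| index:", round(hazard_index, 2))
```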