Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian
2018-04-01
The lungs are among the most important organs of the respiratory system. Lung disorders are various, e.g. pneumonia, emphysema, tuberculosis and lung cancer; of these, lung cancer is the most harmful. Accordingly, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron and Breslow parameter-estimation approaches in the hazard ratio and stratified Cox regression model. The data are based on the medical records of lung cancer patients at the Jember Paru-paru hospital, East Java, Indonesia, in 2016. The factors potentially affecting the endurance of the lung cancer patients comprise sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The results show that the exact method for the stratified Cox regression model performs better than the others, and that patient endurance is affected by age and general condition.
Directory of Open Access Journals (Sweden)
Chen Cao
2016-09-01
This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software, and a flood hazard inventory map was built from them. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models; the remaining 30% (n = 25) were used for validation. Because the XQG used to be a coal mining area, coal-mine caves, subsidence caused by coal mining, and many ground fissures exist in this catchment, so this study took the subsidence risk level into consideration for the FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impact on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using the FR-natural breaks, SI-natural breaks, and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was more appropriate for generating FFHSM for the Xiqu Gully.
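For readers unfamiliar with the frequency ratio method, the sketch below shows the core computation on a toy raster: the share of flood cells falling in a conditioning-parameter class divided by the share of all cells in that class. The layer names, class counts, and values are invented for illustration, not taken from the study.

```python
import numpy as np

def frequency_ratio(classes, flood_mask):
    """Frequency ratio per class of one conditioning parameter.

    FR = (share of flood cells in a class) / (share of all cells in that class).
    FR > 1 means the class is positively associated with flooding.
    """
    fr = {}
    total_cells = classes.size
    total_floods = flood_mask.sum()
    for c in np.unique(classes):
        in_class = classes == c
        pct_floods = flood_mask[in_class].sum() / total_floods
        pct_cells = in_class.sum() / total_cells
        fr[c] = pct_floods / pct_cells
    return fr

# Toy example: a slope raster reclassified into 3 classes and a flood inventory mask
rng = np.random.default_rng(0)
slope_class = rng.integers(1, 4, size=(100, 100))
floods = rng.random((100, 100)) < np.where(slope_class == 1, 0.02, 0.005)
print(frequency_ratio(slope_class, floods))
# A susceptibility index per cell is then the sum of FR values across all parameters
```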
Modeling lahar behavior and hazards
Manville, Vernon; Major, Jon J.; Fagents, Sarah A.
2013-01-01
Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr⁻¹. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behavior of the hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria, and we demonstrate the flexibility with which the hazard modeling distribution approximates different distributions.
A balanced hazard ratio for risk group evaluation from survival data.
Branders, Samuel; Dupont, Pierre
2015-07-30
Common clinical studies assess the quality of prognostic factors, such as gene expression signatures, clinical variables or environmental factors, and cluster patients into various risk groups. Typical examples include cancer clinical trials where patients are clustered into high or low risk groups. Whenever applied to survival data analysis, such groups are intended to represent patients with similar survival odds and to select the most appropriate therapy accordingly. The relevance of such risk groups, and of the related prognostic factors, is typically assessed through the computation of a hazard ratio. We first stress three limitations of assessing risk groups through the hazard ratio: (1) it may promote the definition of arbitrarily unbalanced risk groups; (2) an apparently optimal group hazard ratio can be largely inconsistent with the p-value commonly associated with it; and (3) some marginal changes between risk group proportions may lead to highly different hazard ratio values. Those issues could lead to inappropriate comparisons between various prognostic factors. Next, we propose the balanced hazard ratio to solve those issues. This new performance metric keeps an intuitive interpretation and is equally simple to compute. We also show how the balanced hazard ratio leads to a natural cut-off choice to define risk groups from continuous risk scores. The proposed methodology is validated through controlled experiments for which a prescribed cut-off value is defined by design. Further results are also reported on several cancer prognosis studies, and the proposed methodology could be applied more generally to assess the quality of any prognostic markers. Copyright © 2015 John Wiley & Sons, Ltd.
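Limitation (1) is easy to reproduce in practice. The sketch below uses the Python lifelines library to scan cut-offs on a continuous risk score and compute the ordinary group hazard ratio at each; extreme cut-offs that create tiny, unbalanced groups often inflate the HR. This illustrates the problem the paper identifies; it does not implement the paper's balanced hazard ratio itself, and the data are simulated.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
score = rng.normal(size=n)                       # continuous risk score
T = rng.exponential(scale=np.exp(-0.5 * score))  # higher score -> shorter survival
E = (rng.random(n) < 0.8).astype(int)            # ~80% of events observed

def group_hazard_ratio(cutoff):
    """HR between the 'high' and 'low' groups defined by dichotomizing the score."""
    df = pd.DataFrame({"T": T, "E": E, "high": (score > cutoff).astype(int)})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    return np.exp(cph.params_["high"])

# Scanning cut-offs: extreme cut-offs can yield large HRs from tiny, unbalanced groups
for q in [0.5, 0.9, 0.97]:
    c = np.quantile(score, q)
    print(f"cut-off at {q:.0%} quantile: HR = {group_hazard_ratio(c):.2f}")
```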
Hazard Warning: model misuse ahead
DEFF Research Database (Denmark)
Dickey-Collas, M.; Payne, Mark; Trenkel, V.
2014-01-01
The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based...... first step in assessing the utility of a model in the context of knowledge for decision-making in management...
A flexible additive multiplicative hazard model
DEFF Research Database (Denmark)
Martinussen, T.; Scheike, T. H.
2002-01-01
Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect
Proportional hazards models of infrastructure system recovery
International Nuclear Information System (INIS)
Barker, Kash; Baroud, Hiba
2014-01-01
As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set
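As a sketch of the approach, one can fit a Cox proportional hazards model to recovery durations with lifelines (a Python survival analysis library), treating "recovery" as the event of interest. The covariates and data below are synthetic stand-ins for the electric power outage data set, and the covariate names are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for an outage data set: time-to-restoration with covariates
rng = np.random.default_rng(2)
n = 500
crew_size = rng.integers(1, 6, n)        # covariate describing the response
storm_severity = rng.integers(1, 4, n)   # covariate describing the disruptive event
# Larger crews recover faster (higher "recovery" hazard); severe storms recover slower
rate = np.exp(0.3 * crew_size - 0.5 * storm_severity)
df = pd.DataFrame({
    "duration": rng.exponential(1.0 / rate),
    "restored": 1,                        # every outage is eventually restored
    "crew_size": crew_size,
    "storm_severity": storm_severity,
})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="restored")
cph.print_summary()                       # exp(coef) = effect on the recovery rate
# Likelihood that recovery occurs before time t: 1 - S(t | covariates)
print(1 - cph.predict_survival_function(df.iloc[[0]], times=[1.0, 5.0]))
```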
Geospatial subsidence hazard modelling at Sterkfontein Caves ...
African Journals Online (AJOL)
The geo-hazard subsidence model includes historic subsidence occurrences, terrain (water flow) and water accumulation. Water accumulating on the surface will percolate and reduce the strength of the soil mass, possibly inducing subsidence. Areas for further geotechnical investigation are identified, demonstrating that a ...
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Modeling and Hazard Analysis Using STPA
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describe most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but the problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process in a safety-driven design process, where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis
Hazard identification based on plant functional modelling
International Nuclear Information System (INIS)
Rasmussen, B.; Whetton, C.
1993-10-01
A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
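The Intent/Methods/Constraints decomposition described above can be made concrete with a small object sketch. The class and attribute names below are ours, chosen to mirror the paper's terminology, and the example plant function is invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    """A plant function: a goal, the Methods realizing it, the Constraints limiting it."""
    goal: str
    methods: List["Intent"] = field(default_factory=list)      # decomposable sub-intents
    constraints: List["Intent"] = field(default_factory=list)  # also decomposable

    def walk(self, depth=0):
        """Traverse the functional hierarchy, e.g. to drive a checklist-based review."""
        yield depth, self.goal
        for child in self.methods + self.constraints:
            yield from child.walk(depth + 1)

# Toy functional decomposition of one plant-level intent
cooling = Intent(
    goal="Remove decay heat",
    methods=[Intent("Circulate primary coolant"), Intent("Operate backup pumps")],
    constraints=[Intent("Maintain pump cavitation margin")],
)
for depth, goal in cooling.walk():
    print("  " * depth + goal)
```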
Austin, Peter C; Wagner, Philippe; Merlo, Juan
2017-03-15
Multilevel data occur frequently in many research areas, such as health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR), which corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression, is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
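The authors supply R code; as a hedged Python analogue, and assuming the usual log-normal random-effect formulation, the MHR can be computed from the estimated between-cluster variance with the same functional form as the median odds ratio. The variance value below is an arbitrary example.

```python
from math import exp, sqrt
from scipy.stats import norm

def median_hazard_ratio(sigma2):
    """MHR from the between-cluster variance of a (log-)normal random effect.

    Analogous to the median odds ratio: the median relative hazard when
    comparing identical subjects from two randomly chosen clusters,
    with the higher-risk cluster in the numerator.
    """
    return exp(sqrt(2 * sigma2) * norm.ppf(0.75))

print(median_hazard_ratio(0.25))  # e.g. sigma^2 = 0.25 -> MHR ~ 1.6
```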
The New Italian Seismic Hazard Model
Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.
2017-12-01
In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly aimed at updating the seismic building code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has selected the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (both from seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources, and gridded seismicity based on different approaches. The GMPEs task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model and single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme
Modeling Compound Flood Hazards in Coastal Embayments
Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.
2017-12-01
Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), preferred marginal scenario and reproduced time series of ensembles based on Monte Carlo sampling of bivariate hazard domain. The comparison between resulting extreme water dynamics under the compound hazard scenarios explained above provides an insight to the
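The copula step described above can be sketched with a Gaussian copula: correlated standard normals are mapped to uniforms and then through assumed marginals. Here a gamma distribution stands in for the Log-Pearson Type III discharge marginal and a GEV for sea level; the dependence parameter and all marginal parameters are illustrative, not fitted values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho, n = 0.6, 10_000              # assumed discharge/sea-level dependence

# Gaussian copula: correlated normals -> uniforms -> assumed marginals
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)
discharge = stats.gamma.ppf(u[:, 0], a=2.0, scale=250.0)               # m^3/s, toy marginal
sea_level = stats.genextreme.ppf(u[:, 1], c=-0.1, loc=1.0, scale=0.3)  # m, GEV marginal

# Compound design scenarios: joint exceedances a univariate analysis would understate
both_extreme = (u[:, 0] > 0.99) & (u[:, 1] > 0.99)
print(f"P(both > 99th pct) = {both_extreme.mean():.4f} vs {0.01**2:.4f} if independent")
print("example design pairs:", np.c_[discharge[both_extreme], sea_level[both_extreme]][:3])
```

Sampled pairs like these are the kind of boundary forcing that would then drive the hydrodynamic model runs.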
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.
Quantitative occupational risk model: Single hazard
International Nuclear Information System (INIS)
Papazoglou, I.A.; Aneziris, O.N.; Bellamy, L.J.; Ale, B.J.M.; Oh, J.
2017-01-01
A model for the quantification of the occupational risk of a worker exposed to a single hazard is presented. The model connects the working conditions and worker behaviour to the probability of an accident resulting in one of three types of consequence: recoverable injury, permanent injury and death. Working conditions and safety barriers in place to reduce the likelihood of an accident are included. Logical connections are modelled through an influence diagram. Quantification of the model is based on two sources of information: a) the number of accidents observed over a period of time and b) an assessment of exposure data of activities and working conditions over the same period of time and the same working population. The effectiveness of risk-reducing measures affecting the working conditions, worker behaviour and/or safety barriers can be quantified through the effect of these measures on occupational risk. - Highlights: • Quantification of occupational risk from a single hazard. • Influence diagram connects working conditions, worker behaviour and safety barriers. • Necessary data include the number of accidents and the total exposure of the workers. • Effectiveness of risk-reducing measures is quantified through the impact on the risk. • An example illustrates the methodology.
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are also given to illustrate different aging properties and stochastic comparisons of the model.
Modelling the liquidity ratio as macroprudential instrument
Jan Willem van den End; Mark Kruidhof
2012-01-01
The Basel III Liquidity Coverage Ratio (LCR) is a microprudential instrument to strengthen the liquidity position of banks. However, if in extreme scenarios the LCR becomes a binding constraint, the interaction of bank behaviour with the regulatory rule can have negative externalities. We simulate the systemic implications of the LCR by a liquidity stress-testing model, which takes into account the impact of bank reactions on second round feedback effects. We show that a flexible approach of ...
Comparison of Fuzzy-Based Models in Landslide Hazard Mapping
Directory of Open Access Journals (Sweden)
N. Mijani
2017-09-01
Landslide is one of the main geomorphic processes affecting development prospects in mountainous areas and causes disastrous accidents. Landslides are governed by uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contribution of this paper lies in the comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, and in the comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, an Iranian city that has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment showed that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR cover 13, 26 and 35 percent of the study area, respectively, with a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for zoning landslides in the city of Sari, with the Fuzzy Gamma method a close second.
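The fuzzy overlay operators named above have standard forms, sketched below with numpy on toy membership layers; the gamma value is an assumption and the Fuzzy-AHP weighting step is omitted.

```python
import numpy as np

def fuzzy_or(memberships):
    """Fuzzy OR overlay: cell-wise maximum across criterion membership layers."""
    return np.max(memberships, axis=0)

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma: compromise between the fuzzy product and the fuzzy algebraic sum."""
    prod = np.prod(memberships, axis=0)
    alg_sum = 1 - np.prod(1 - memberships, axis=0)
    return prod ** (1 - gamma) * alg_sum ** gamma

# Toy membership layers (0..1) for e.g. slope, land use, distance-to-river criteria
layers = np.random.default_rng(4).random((3, 5, 5))
print(fuzzy_or(layers))
print(fuzzy_gamma(layers, gamma=0.9))
```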
Bibliography - Existing Guidance for External Hazard Modelling
International Nuclear Information System (INIS)
Decker, Kurt
2015-01-01
The bibliography of deliverable D21.1 includes existing international and national guidance documents and standards on external hazard assessment together with a selection of recent scientific papers, which are regarded to provide useful information on the state of the art of external event modelling. The literature database is subdivided into International Standards, National Standards, and Science Papers. The deliverable is treated as a 'living document' which is regularly updated as necessary during the lifetime of ASAMPSA-E. The current content of the database is about 140 papers. Most of the articles are available as full-text versions in PDF format. The deliverable is available as an EndNote X4 database and as text files. The database includes the following information: Reference, Key words, Abstract (if available), PDF file of the original paper (if available), Notes (comments by the ASAMPSA-E consortium if available) The database is stored at the ASAMPSA-E FTP server hosted by IRSN. PDF files of original papers are accessible through the EndNote software
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differed among phases. Given the best hazard-based model for each incident time phase, the predictions are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
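The Python lifelines library exposes several of the AFT families named above, so the distribution-selection step can be mirrored by comparing fits by AIC. The data below are synthetic, the covariate names are assumptions, and the gamma-heterogeneity and flexible-spline variants are omitted.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

rng = np.random.default_rng(5)
n = 400
lanes_blocked = rng.integers(0, 3, n)
injury = rng.integers(0, 2, n)
# Synthetic incident durations (minutes); both covariates lengthen the duration
dur = rng.lognormal(mean=3.0 + 0.2 * lanes_blocked + 0.4 * injury, sigma=0.6)
df = pd.DataFrame({"duration": dur, "observed": 1,
                   "lanes_blocked": lanes_blocked, "injury": injury})

# Compare candidate AFT distributions by AIC, as in the distribution-selection step
for Fitter in (WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter):
    m = Fitter().fit(df, duration_col="duration", event_col="observed")
    print(f"{Fitter.__name__:22s} AIC = {m.AIC_:.1f}")
```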
A Model for Generating Multi-hazard Scenarios
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period, and multiple hazards tend to complicate the combined risk through their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
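A minimal Monte Carlo sketch of the scenario-generation idea: hazard occurrences are treated as independent Poisson processes with assumed mean return periods over a planning horizon. The rates are invented, and the real model's hazard interactions and stakeholder inputs are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
YEARS, RUNS = 50, 10_000
# Assumed mean return periods (years) for each hazard in the study area
return_periods = {"earthquake": 100, "landslide": 20, "flood": 5}

# Poisson occurrence counts per simulation run over the planning horizon
counts = {h: rng.poisson(YEARS / rp, size=RUNS) for h, rp in return_periods.items()}

# Probability that at least two different hazards strike within the horizon
multi = sum(c > 0 for c in counts.values())   # number of distinct hazards per run
print("P(multi-hazard scenario) =", np.mean(multi >= 2))
```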
A structure for models of hazardous materials with complex behavior
International Nuclear Information System (INIS)
Rodean, H.C.
1991-01-01
Most atmospheric dispersion models used to assess the environmental consequences of accidental releases of hazardous chemicals do not have the capability to simulate the pertinent chemical and physical processes associated with the release of the material and its mixing with the atmosphere. The purpose of this paper is to present a materials sub-model with the flexibility to simulate the chemical and physical behaviour of a variety of materials released into the atmosphere. The model, which is based on thermodynamic equilibrium, incorporates the ideal gas law, temperature-dependent vapor pressure equations, temperature-dependent dissociation reactions, and reactions with atmospheric water vapor. The model equations, written in terms of pressure ratios and dimensionless parameters, are used to construct equilibrium diagrams with temperature and the mass fraction of the material in the mixture as coordinates. The model's versatility is demonstrated by its application to the release of UF₆ and N₂O₄, two materials with very different physical and chemical properties. (author)
An optimization model for transportation of hazardous materials
International Nuclear Information System (INIS)
Seyed-Hosseini, M.; Kheirkhah, A. S.
2005-01-01
In this paper, the optimal routing problem for the transportation of hazardous materials is studied. Routing aimed at reducing the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, according to the current rules and regulations for road transportation of hazardous materials in Iran, a routing problem is designed in which the routes for several independent movements are determined simultaneously. To examine the model, the problem of transporting two different dangerous materials in the road network of Mazandaran province in the north of Iran is formulated and solved by applying an integer programming model.
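For a single movement, the route-selection trade-off can be illustrated with shortest-path computations on a toy road graph using networkx, once with risk and once with cost as the edge weight. The network and weights are invented; the actual model couples several movements in one integer program, which this sketch does not do.

```python
import networkx as nx

# Toy road network: edges carry both a travel cost and a transport-risk weight
G = nx.Graph()
edges = [("depot", "A", 10, 0.8), ("depot", "B", 14, 0.2),
         ("A", "plant", 12, 0.9), ("B", "plant", 11, 0.3), ("A", "B", 4, 0.1)]
for u, v, cost, risk in edges:
    G.add_edge(u, v, cost=cost, risk=risk)

# Risk-minimal vs cost-minimal route for a single hazmat movement
print(nx.shortest_path(G, "depot", "plant", weight="risk"))
print(nx.shortest_path(G, "depot", "plant", weight="cost"))
# The full model would bind several such movements together with shared-link constraints
```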
Automated economic analysis model for hazardous waste minimization
International Nuclear Information System (INIS)
Dharmavaram, S.; Mount, J.B.; Donahue, B.A.
1990-01-01
The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in the C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis of minimization of the Army's six most important hazardous waste streams: solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States
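The life cycle cost comparison such a model performs can be sketched as a present-value calculation. The discount rate, analysis period, and cost figures below are invented for illustration and are not EAHWM defaults.

```python
def life_cycle_cost(capital, annual_cost, years=20, discount=0.07):
    """Present value of capital plus recurring costs over the analysis period."""
    pv_annual = annual_cost * (1 - (1 + discount) ** -years) / discount
    return capital + pv_annual

current = life_cycle_cost(capital=0, annual_cost=120_000)        # disposal as-is
recovery = life_cycle_cost(capital=250_000, annual_cost=35_000)  # solvent recovery unit
print(f"current practice: ${current:,.0f}  recovery: ${recovery:,.0f}")
print("recommend recovery" if recovery < current else "keep current practice")
```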
Wang, Wei; Albert, Jeffrey M
2017-08-01
An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
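The paper approximates the baseline log cumulative hazard with fractional polynomials or restricted cubic splines; the sketch below instead assumes a closed-form Weibull proportional hazards outcome model and a normal mediator, purely to show how the mediation formula yields natural direct and indirect effects on the survival probability scale. Every parameter value is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Assumed Weibull PH outcome model and normal mediator model (illustrative values)
lam, k = 0.05, 1.2        # baseline scale and shape
b_a, b_m = 0.4, 0.3       # exposure and mediator log-hazard ratios
g0, g1, sd = 0.0, 0.5, 1.0  # mediator model: M | A=a ~ N(g0 + g1*a, sd^2)

def surv(t, a, m):
    """S(t | a, m) = exp(-lam * t^k * exp(b_a*a + b_m*m))."""
    return np.exp(-lam * t**k * np.exp(b_a * a + b_m * m))

def surv_cf(t, a, a_star):
    """S_{a, M(a*)}(t): exposure set to a, mediator drawn as if exposure were a*."""
    integrand = lambda m: surv(t, a, m) * norm.pdf(m, g0 + g1 * a_star, sd)
    return quad(integrand, -10, 10)[0]

t = 5.0
nde = surv_cf(t, 1, 0) - surv_cf(t, 0, 0)   # natural direct effect on S(t) scale
nie = surv_cf(t, 1, 1) - surv_cf(t, 1, 0)   # natural indirect effect via the mediator
print(f"TE = {nde + nie:.4f}  NDE = {nde:.4f}  NIE = {nie:.4f}")
```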
Technology Learning Ratios in Global Energy Models
International Nuclear Information System (INIS)
Varela, M.
2001-01-01
The introduction of a new technology supposes that, as its production and utilisation increase, its operation improves and its investment and production costs decrease. The accumulation of experience with, and learning of, a new technology proceeds in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time. Traditionally, this trend has been incorporated in an exogenous way in these energy models, that is to say, as a function of time only. This practice is applied to the cost indicators of the technology, such as the specific investment costs, or to the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy system, including the endogenous modelling of the process of technological progress. Finally, it compares several of the most widely used global energy models (MARKAL, MESSAGE and ERIS), and more concretely their use of the concept of technological learning. (Author) 17 refs
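As a concrete illustration of the learning-curve mechanics discussed above, the sketch below implements the standard one-factor experience curve, in which specific cost falls by a fixed ratio with every doubling of cumulative installed capacity. The 0.8 progress ratio (a 20% learning rate) and the capacity path are assumed values, not figures from the models named here.

```python
import numpy as np

def learning_curve(c0, cumulative, progress_ratio=0.8):
    """Specific cost after capacity growth: cost falls to `progress_ratio`
    of its previous value for every doubling of cumulative capacity."""
    b = -np.log2(progress_ratio)          # learning-curve exponent
    return c0 * (cumulative / cumulative[0]) ** (-b)

capacity = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # GW, doubling each step
print(learning_curve(1000.0, capacity))   # $/kW: 1000, 800, 640, 512, ...
```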
A Temporal Ratio Model of Memory
Brown, Gordon D. A.; Neath, Ian; Chater, Nick
2007-01-01
A model of memory retrieval is described. The model embodies four main claims: (a) temporal memory--traces of items are represented in memory partly in terms of their temporal distance from the present; (b) scale-similarity--similar mechanisms govern retrieval from memory over many different timescales; (c) local distinctiveness--performance on a…
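A rough sketch of how the temporal-distance claim (a) can be operationalized: items are placed on a logarithmic time axis, and an item's retrievability is taken as its distinctiveness from its neighbours there. The exponential similarity form and the parameter c are our assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def retrieval_scores(ages, c=1.0):
    """Temporal-distance account: items live on a log time axis; an item is
    retrievable to the extent it is distinct from its neighbours there."""
    log_t = np.log(ages)
    sim = np.exp(-c * np.abs(log_t[:, None] - log_t[None, :]))  # pairwise similarity
    return 1.0 / sim.sum(axis=1)          # distinctiveness of each item

# Items presented 1..10 s ago: recent items are more distinct (a recency effect)
print(retrieval_scores(np.arange(1.0, 11.0)))
```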
A high-resolution global flood hazard model
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ~90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ~1 km, mean absolute error in flooded fraction falls to ~5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
A conflict model for the international hazardous waste disposal dispute
International Nuclear Information System (INIS)
Hu Kaixian; Hipel, Keith W.; Fang, Liping
2009-01-01
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle
2018-01-01
For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local‐scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...
Agent-based Modeling with MATSim for Hazards Evacuation Planning
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
[Using log-binomial model for estimating the prevalence ratio].
Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue
2010-05-01
To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitudes towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seems to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
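The paper provides SAS programs; as a rough Python analogue, a log-binomial model can be fit with statsmodels' GLM using a binomial family with a log link, the exponentiated coefficients being adjusted prevalence ratios. The data below are simulated, and the COPY method itself is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
smoker = rng.integers(0, 2, n)
age = rng.uniform(20, 70, n)
p = np.exp(-2.0 + 0.4 * smoker + 0.01 * age)   # true PR for smoking = e^0.4
oppose = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"smoker": smoker, "age": age}))
fit = sm.GLM(oppose, X,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(fit.params))   # exp(coef) = adjusted prevalence ratios
# Note: with continuous covariates the log-binomial fit may fail to converge;
# the COPY method (or a Poisson model with robust SEs) is the usual fallback.
```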
Developments in consequence modelling of accidental releases of hazardous materials
Boot, H.
2012-01-01
The modelling of consequences of releases of hazardous materials in the Netherlands has mainly been based on the “Yellow Book”. Although there is no updated version of this official publication, new insights have been developed during the last decades. This article will give an overview of new
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
A New Seismic Hazard Model for Mainland China
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the hazard to ground subsidence using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with the probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
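CHAID and QUEST are not available in scikit-learn, so the sketch below substitutes its CART-style DecisionTreeClassifier purely to illustrate the same workflow: a 50/50 training/validation split followed by AUC validation. The conditioning factors and the toy subsidence rule are invented for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 1000
X = np.c_[rng.uniform(0, 500, n),      # e.g. depth of mined cavity (m)
          rng.uniform(0, 50, n),       # slope (degrees)
          rng.integers(0, 4, n)]       # geology class
# Toy rule: shallow workings on steep ground subside more often
y = (rng.random(n) < 0.1 + 0.5 * (X[:, 0] < 100) * (X[:, 1] > 25)).astype(int)

# 50/50 split for training and validation, mirroring the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, tree.predict_proba(X_va)[:, 1]))
```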
Modeling of Marine Natural Hazards in the Lesser Antilles
Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim
2010-05-01
The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis and flooding. The historical data of marine natural hazards for the Lesser Antilles and specially, for Guadeloupe are presented briefly. Numerical simulation of several historical tsunamis in the Caribbean Sea (1755 Lisbon trans-Atlantic tsunami, 1867 Virgin Island earthquake tsunami, 2003 Montserrat volcano tsunami) are performed within the framework of the nonlinear-shallow theory. Numerical results demonstrate the importance of the real bathymetry variability with respect to the direction of propagation of tsunami wave and its characteristics. The prognostic tsunami wave height distribution along the Caribbean Coast is computed using various forms of seismic and hydrodynamics sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular, cyclones "Lilli" in 2002 and "Dean" in 2007. Obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. Obtained results are described in details in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science Tsunami Hazards. 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0
Religiousness and hazardous alcohol use: a conditional indirect effects model.
Jankowski, Peter J; Hardy, Sam A; Zamboanga, Byron L; Ham, Lindsay S
2013-08-01
The current study examined a conditional indirect effects model of the association between religiousness and adolescents' hazardous alcohol use. In doing so, we responded to the need to include both mediators and moderators, and the need for theoretically informed models when examining religiousness and adolescents' alcohol use. The sample consisted of 383 adolescents, aged 15-18, who completed an online questionnaire. Results of structural equation modeling supported the proposed model. Religiousness was indirectly associated with hazardous alcohol use through both positive alcohol expectancy outcomes and negative alcohol expectancy valuations. Significant moderating effects for alcohol expectancy valuations on the association between alcohol expectancies and alcohol use were also found. The effects for alcohol expectancy valuations confirm valuations as a distinct construct to that of alcohol expectancy outcomes, and offer support for the protective role of internalized religiousness on adolescents' hazardous alcohol use as a function of expectancy valuations. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Rockfall hazard analysis using LiDAR and spatial modeling
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.
Defaultable Game Options in a Hazard Process Model
Directory of Open Access Journals (Sweden)
Tomasz R. Bielecki
2009-01-01
Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.
Standardized binomial models for risk or prevalence ratios and differences.
Richardson, David B; Kinlaw, Alan C; MacLehose, Richard F; Cole, Stephen R
2015-10-01
Epidemiologists often analyse binary outcomes in cohort and cross-sectional studies using multivariable logistic regression models, yielding estimates of adjusted odds ratios. It is widely known that the odds ratio closely approximates the risk or prevalence ratio when the outcome is rare, but not when the outcome is common. Consequently, investigators may decide to directly estimate the risk or prevalence ratio using a log-binomial regression model. We describe the use of a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences. We illustrate the proposed approach using data from a cohort study of coronary heart disease status in Evans County, Georgia, USA. The approach reduces the problems with model convergence typical of log-binomial regression by shifting all explanatory variables except the exposures of primary interest from the linear predictor of the outcome regression model to a model for the standardization weights. The approach also facilitates evaluation of departures from additivity in the joint effects of two exposures. Epidemiologists should consider reporting standardized risk or prevalence ratios and differences in cohort and cross-sectional studies. These are readily obtained using the SAS, Stata and R statistical software packages. The proposed approach estimates the exposure effect in the total population. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
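A hedged sketch of the weighting idea in Python (simulated data, not the Evans County cohort; confounders go into a model for the standardization weights, and weighted prevalences give the standardized ratio and difference):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"age": rng.normal(50, 10, n)})
df["exposed"] = rng.binomial(1, 1 / (1 + np.exp(-(df["age"] - 50) / 10)))
df["chd"] = rng.binomial(1, 0.10 + 0.05 * df["exposed"])

# Model for the standardization weights (confounders only)
ps = smf.glm("exposed ~ age", df, family=sm.families.Binomial()).fit()
p = ps.predict(df)
df["w"] = np.where(df["exposed"] == 1, 1 / p, 1 / (1 - p))

# Weighted (standardized) risks by exposure group
r1 = np.average(df.loc[df.exposed == 1, "chd"], weights=df.loc[df.exposed == 1, "w"])
r0 = np.average(df.loc[df.exposed == 0, "chd"], weights=df.loc[df.exposed == 0, "w"])
print("standardized risk ratio:", r1 / r0, "risk difference:", r1 - r0)
```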
Multivariate Models for Prediction of Human Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2016-01-01
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
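A hedged sketch of the two learners on synthetic stand-in features (scikit-learn assumed; the feature construction and 72/24 split below are illustrative, not the study's 12 variable groups):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(96, 3))   # stand-ins for e.g. DPRA, h-CLAT, KeratinoSens
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=96)) > 0  # sensitizer yes/no

X_train, X_test, y_train, y_test = X[:72], X[72:], y[:72], y[72:]
for model in (LogisticRegression(), SVC()):
    clf = make_pipeline(StandardScaler(), model).fit(X_train, y_train)
    print(type(model).__name__, "test accuracy:", clf.score(X_test, y_test))
```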
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, the life time distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case, letting c = 2
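A small numerical sketch of the model itself, using the cumulative hazard H(t) = a*t + (b/c)*t^c implied by integrating h(t), and S(t) = exp(-H(t)) (parameter values illustrative):

```python
import numpy as np

def hazard(t, a, b, c):
    # h(t) = a + b * t**(c - 1); c = 2 gives the linear case h(t) = a + b*t
    return a + b * t**(c - 1)

def survival(t, a, b, c):
    # S(t) = exp(-H(t)) with cumulative hazard H(t) = a*t + (b/c)*t**c
    return np.exp(-(a * t + (b / c) * t**c))

t = np.linspace(0.5, 5.0, 4)
print(hazard(t, a=0.5, b=0.2, c=2.0))
print(survival(t, a=0.5, b=0.2, c=2.0))
```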
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
Spatial age-length key modelling using continuation ratio logits
DEFF Research Database (Denmark)
Berg, Casper W.; Kristensen, Kasper
2012-01-01
The so-called age-length key (ALK) is then used to obtain the age distribution. Regional differences in ALKs are not uncommon, but stratification is often problematic due to a small number of samples. Here, we combine generalized additive modelling with continuation ratio logits to model the probability of age
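A sketch of the continuation-ratio construction itself (toy logits, not the paper's fitted GAM): each logit models P(age = a | age >= a), and sequential products recover the unconditional age distribution.

```python
import numpy as np

def age_probs(cond_logits):
    """cond_logits[a] is the logit of P(age = a | age >= a) for a = 0..A-1;
    the last age class absorbs the remaining probability."""
    p_cond = 1 / (1 + np.exp(-np.asarray(cond_logits)))
    probs, survive = [], 1.0
    for pc in p_cond:
        probs.append(survive * pc)   # P(age = a) = P(age >= a) * P(a | >= a)
        survive *= (1 - pc)
    probs.append(survive)            # oldest age group
    return np.array(probs)

print(age_probs([0.5, 0.0, -0.5]))  # four age classes; sums to 1
```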
International Nuclear Information System (INIS)
Kahia, S.; Brinkman, H.; Bareith, A.; Siklossy, T.; Vinot, T.; Mateescu, T.; Espargilliere, J.; Burgazzi, L.; Ivanov, I.; Bogdanov, D.; Groudev, P.; Ostapchuk, S.; Zhabin, O.; Stojka, T.; Alzbutas, R.; Kumar, M.; Nitoi, M.; Farcasiu, M.; Borysiewicz, M.; Kowal, K.; Potempski, S.
2016-01-01
The goal of this report is to provide guidance on practices to model man-made hazards (mainly external fires and explosions) and accidental aircraft crash hazards and to implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the first ASAMPSA-E End Users Workshop (May 2014, Uppsala, Sweden). The objective of WP22 is to provide solutions for the purposes of the different parts of man-made hazards Level 1 PSA fulfilment. This guidance focuses on man-made hazards, namely external fires and explosions, and accidental aircraft crash hazards. The guidance developed refers to existing guidance whenever possible. The initial part of the guidance (the WP21 part) reflects current practices to assess the frequencies of each type of hazard or combination of hazards (including correlated hazards) as initiating events for PSAs. The sources and quality of hazard data, the elements of hazard assessment methodologies and relevant examples are discussed. Classification and criteria to properly assess hazard combinations, as well as examples and methods for assessment of these combinations, are included in this guidance. Appendixes present additional material with examples of practical approaches to aircraft crash and man-made hazards. The following issues are addressed: 1) hazard assessment methodologies, including issues related to hazard combinations; 2) modelling of safety-related SSC equipment; 3) HRA; 4) emergency response; 5) multi-unit issues. Recommendations and also limitations, gaps identified in the existing methodologies and a list of open issues are included. At all stages of this guidance, and especially from an industrial end-user perspective, one must keep in mind that the development of man-made hazards probabilistic analysis must be conditioned on the ability to ultimately obtain a representative risk
A decision model for the risk management of hazardous processes
International Nuclear Information System (INIS)
Holmberg, J.E.
1997-03-01
A decision model for the risk management of hazardous processes is formulated in this study as an optimisation problem of a point process. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design, and (3) operational decisions. These three controlling methods play quite different roles in practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximising the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of operational strategies given a certain design and licensing time. This is done by a dynamic risk model (a marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long-term control variable guiding the selection of operational alternatives in short-term problems is studied. The optimisation problem is solved by the stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)
Modeling emergency evacuation for major hazard industrial sites
International Nuclear Information System (INIS)
Georgiadou, Paraskevi S.; Papazoglou, Ioannis A.; Kiranoudis, Chris T.; Markatos, Nikolaos C.
2007-01-01
A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies
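A minimal sketch of the node-to-node movement (a toy three-node network with a fixed transition matrix; in the paper the probabilities change dynamically with node and link states):

```python
import numpy as np

# Toy transition matrix: rows = origin node, columns = destination node.
P = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])   # node 2 is the safe destination (absorbing)

p = np.array([1.0, 0.0, 0.0])     # all evacuees start at node 0
for step in range(1, 6):
    p = p @ P                     # expected occupancy evolves as p @ P
    print(f"t={step}: expected occupancy {p.round(3)}")
```

A Monte Carlo variant would instead sample individual node-to-node moves, yielding the sample of evacuee trajectories the abstract describes.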
Representing the Past by Solid Modeling + Golden Ratio Analysis
Ding, Suining
2008-01-01
This paper describes the procedures of reconstructing ancient architecture using solid modeling with geometric analysis, especially Golden Ratio analysis. In the past, the recovery and reconstruction of ruins required bringing together fragments of evidence and vast amounts of measurements from the archaeological site. Although researchers and…
Opinion: The use of natural hazard modeling for decision making under uncertainty
David E. Calkin; Mike Mentis
2015-01-01
Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...
Likelihood ratio sequential sampling models of recognition memory.
Osth, Adam F; Dennis, Simon; Heathcote, Andrew
2017-02-01
The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to construct a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of the choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
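A sketch of the exact Gaussian likelihood ratio transformation such models build on (illustrative parameter values; unequal variances give the greater target-than-lure variance seen in ROCs):

```python
import numpy as np
from scipy.stats import norm

def log_lr(x, mu_t=1.0, sig_t=1.25, mu_l=0.0, sig_l=1.0):
    # log of N(x; mu_t, sig_t) / N(x; mu_l, sig_l) for memory strength x
    return norm.logpdf(x, mu_t, sig_t) - norm.logpdf(x, mu_l, sig_l)

x = np.linspace(-2, 3, 6)
print(log_lr(x))   # quadratic (non-monotone) in x when the variances differ
```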
Models for estimating the radiation hazards of uranium mines
International Nuclear Information System (INIS)
Wise, K.N.
1982-01-01
Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined
Models for estimating the radiation hazards of uranium mines
International Nuclear Information System (INIS)
Wise, K.N.
1990-01-01
Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs.
Functional form diagnostics for Cox's proportional hazards model.
León, Larry F; Tsai, Chih-Ling
2004-03-01
We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.
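For a comparable functional-form check with standard tools, a hedged sketch using martingale residuals from the lifelines package (not the paper's censoring-consistent residuals; assumes lifelines' CoxPHFitter API):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["T"] = rng.exponential(1 / np.exp(0.7 * df["x"] ** 2))  # true effect is quadratic
df["E"] = 1                                                # no censoring in this toy

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
resid = cph.compute_residuals(df, kind="martingale")
# A smooth of the residuals against x should reveal the quadratic functional form
print(resid.head())
```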
VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation
Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.
2009-12-01
Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the
Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina
2016-06-14
To examine the differences between the Prevalence Ratio (PR) and the Odds Ratio (OR) in a cross-sectional study and to provide tools to calculate the PR using two statistical packages widely used in substance use research (STATA and R). We used cross-sectional data from 41,263 participants of 16 European countries participating in the Survey on Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were: age, educational level and country of residence. The PR of hazardous drinking in men relative to women was estimated using the Mantel-Haenszel method, log-binomial regression models and Poisson regression models with robust variance. These estimates were compared to the OR calculated using logistic regression models. The prevalence of hazardous drinkers varied among countries. Generally, men had a higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. The estimated PR was identical regardless of the method and the statistical package used. However, the OR overestimated the PR, depending on the prevalence of hazardous drinking in the country. In cross-sectional studies, where comparisons between countries with differences in the prevalence of the disease or condition are made, it is advisable to use the PR instead of the OR.
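A sketch of the Poisson-regression-with-robust-variance route in Python's statsmodels (the paper demonstrates STATA and R; data here are simulated with a common outcome, the case where an OR would overstate the PR):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"male": rng.binomial(1, 0.5, 5000)})
df["hazardous"] = rng.binomial(1, 0.20 + 0.08 * df["male"])  # common outcome

# Poisson regression on a binary outcome with a sandwich (robust) variance
fit = smf.glm("hazardous ~ male", df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print("PR:", np.exp(fit.params["male"]))   # ~0.28/0.20 = 1.4; an OR would be larger
```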
Universal amplitude ratios in the 3D Ising model
International Nuclear Information System (INIS)
Caselle, M.; Hasenbusch, M.
1998-01-01
We present a high precision Monte Carlo study of various universal amplitude ratios of the three-dimensional Ising spin model. Using state-of-the-art simulation techniques we studied the model close to criticality in both phases. Great care was taken to control systematic errors due to finite size effects and correction-to-scaling terms. We obtain C+/C- = 4.75(3), f+,2nd/f-,2nd = 1.95(2) and u* = 14.3(1). Our results are compatible with those obtained by field-theoretic methods applied to the φ^4 theory and high and low temperature series expansions of the Ising model. (orig.)
Preliminary deformation model for National Seismic Hazard map of Indonesia
Energy Technology Data Exchange (ETDEWEB)
Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo,; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)
2015-04-24
A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed from block rotation and strain accumulation functions in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creep in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults where significant deformation reaches 25 mm/year.
Optimization of maintenance policy using the proportional hazard model
Energy Technology Data Exchange (ETDEWEB)
Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)
2009-01-15
The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age during a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied to components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we expound a new method to integrate the effect of CM while planning the PM policy. The proportional hazard function was used as the modeling tool for this purpose. Interesting results were obtained when policies that take the CM effect into consideration were compared with those that do not.
Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment
Energy Technology Data Exchange (ETDEWEB)
Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank
2008-11-01
Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application
Modeling exposure to persistent chemicals in hazard and risk assessment.
Cowan-Ellsberry, Christina E; McLachlan, Michael S; Arnot, Jon A; Macleod, Matthew; McKone, Thomas E; Wania, Frank
2009-10-01
Fate and exposure modeling has not, thus far, been explicitly used in the risk profile documents prepared for evaluating the significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of persistent organic pollutants (POP) and persistent, bioaccumulative, and toxic (PBT) chemicals in the environment. The goal of this publication is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include 1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; 2) directly estimating the exposure of the environment, biota, and humans to provide information to complement measurements or where measurements are not available or are limited; 3) to identify the key processes and chemical or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and 4) forecasting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and
Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.
2017-01-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood
A Model Suggestion to Predict Leverage Ratio for Construction Projects
Directory of Open Access Journals (Sweden)
Özlem Tüz
2013-12-01
Full Text Available By its nature, construction is an industry with high uncertainty and risk, and it carries high leverage ratios. Firms with low equity work on large projects through the progress-payment system, but in this case even a small shortfall in the planned cash flows constitutes a major risk for the company. The use of leverage allows large-scale profit targets to be achieved with a small investment, but it also brings high risk: investors may lose all or a portion of their money. This study targets the monitoring and measurement of the leverage ratio under displaced cash inflows for construction projects that do business in the sector with high leverage and low cash. The model shows the cash need that arises when cash inflows drift: work can be done in the early stages of the project with little capital, but in the later stages a rapidly growing capital need arises. The values obtained from the model may be used to supply the needed capital at the right time by anticipating the risks caused by cash-flow delays in construction projects that use high leverage ratios.
Numerical Modelling of Extreme Natural Hazards in the Russian Seas
Arkhipkin, Victor; Dobrolyubov, Sergey; Korablina, Anastasia; Myslenkov, Stanislav; Surkova, Galina
2017-04-01
Storm surges and extreme waves are severe natural sea hazards. Due to the almost complete lack of natural observations of these phenomena in the Russian seas (Caspian, Black, Azov, Baltic, White, Barents, Okhotsk, Kara), especially regarding their formation, development and destruction, they have been studied using numerical simulation. To calculate the parameters of wind waves for the seas listed above, except the Barents Sea, the spectral model SWAN was applied. For the Barents and Kara seas we used the WAVEWATCH III model. The formation and development of storm surges were studied using the ADCIRC model. The input data for the models were bottom topography, wind, atmospheric pressure and ice cover. In the modelling of surges in the White and Barents seas, tidal level fluctuations were used; they were calculated from 16 harmonic constants obtained from the global tidal atlas FES2004. Wind, atmospheric pressure and ice cover were taken from the NCEP/NCAR reanalysis for the period from 1948 to 2010, and the NCEP/CFSR reanalysis for the period from 1979 to 2015. In the modelling we used both regular and unstructured grids. The wave climate of the Caspian, Black, Azov, Baltic and White seas was obtained, and the extreme wave height possible once in 100 years was calculated. The statistics of storm surges for the White, Barents and Azov seas were evaluated, and the contributions of wind and atmospheric pressure to the formation of surges were estimated. A technique for the climatic forecasting of the frequency of storm synoptic situations was developed and applied to each sea. The research was carried out with financial support of the RFBR (grant 16-08-00829).
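The once-in-100-years estimate can be illustrated with a generic extreme-value fit (a sketch on simulated annual maxima using scipy; not the authors' method, which the abstract does not detail):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# Toy 60-year record of annual maximum significant wave height (m)
annual_max_hs = genextreme.rvs(c=-0.1, loc=6.0, scale=1.2,
                               size=60, random_state=rng)

c, loc, scale = genextreme.fit(annual_max_hs)
h100 = genextreme.ppf(1 - 1 / 100, c, loc, scale)  # 1% annual exceedance
print(f"100-year wave height estimate: {h100:.1f} m")
```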
Energy Technology Data Exchange (ETDEWEB)
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including the protection of electric power, roadway, and hospital networks.
Modelling Of Anticipated Damage Ratio On Breakwaters Using Fuzzy Logic
Mercan, D. E.; Yagci, O.; Kabdasli, S.
2003-04-01
In breakwater design the determination of armour unit weight is especially important in terms of the structure's life. In a typical experimental breakwater stability study, different wave series composed of different wave height, wave period and wave steepness characteristics are applied in order to investigate the performance of the structure. Using a classical approach, a regression equation is generated for the damage ratio as a function of characteristic wave height; the parameters wave period and wave steepness are not considered. In this study, differing from the classical approach, a fuzzy-logic relationship was generated for the damage ratio as a function of mean wave period (T_m), wave steepness (H_s/L_m) and significant wave height (H_s). The system's inputs were mean wave period (T_m), wave steepness (H_s/L_m) and significant wave height (H_s). For fuzzification, all input variables were divided into three fuzzy subsets, their membership functions were defined using the method developed by Mandani (Mandani, 1974) and the rules were written, while for defuzzification the centroid method was used. In order to calibrate and test the generated models an experimental study was conducted. The experiments were performed in a wave flume (24 m long, 1.0 m wide and 1.0 m high) using 20 different irregular wave series (P-M spectrum). Throughout the study, the water depth was 0.6 m and the breakwater cross-sectional slope was 1V/2H. In the armour layer, a type of artificial armour unit known as antifer cubes was used. The results of the established fuzzy logic model and the regression equation model were compared with experimental data, and it was determined that the established fuzzy logic model gave a more accurate prediction of the damage ratio on this type of breakwater. References: Mandani, E.H., "Application of Fuzzy Algorithms for Control of Simple Dynamic Plant", Proc. IEE, vol. 121, no. 12, December 1974.
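A bare-bones sketch of the Mandani (1974)-style pipeline described above (triangular memberships, rule firing by min, centroid defuzzification; all membership breakpoints and rules are invented for illustration):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

Hs, Tm = 0.14, 1.6                      # crisp inputs (toy units)
fire_low  = min(tri(Hs, 0.0, 0.1, 0.2), tri(Tm, 1.0, 1.5, 2.0))
fire_high = min(tri(Hs, 0.1, 0.2, 0.3), tri(Tm, 1.5, 2.0, 2.5))

d = np.linspace(0, 100, 501)            # damage-ratio universe (%)
out = np.maximum(np.minimum(fire_low,  tri(d, 0, 20, 40)),
                 np.minimum(fire_high, tri(d, 40, 70, 100)))
print("damage ratio:", (d * out).sum() / out.sum())  # centroid defuzzification
```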
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Energy Technology Data Exchange (ETDEWEB)
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a downwind receptor estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
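For a serial leak path, the combinatory idea reduces to multiplying per-compartment factors, which is what the assumed 0.5 × 0.5 standard encodes; a trivial sketch:

```python
def total_lpf(stage_lpfs):
    """Total leak path factor for compartments traversed in series."""
    out = 1.0
    for f in stage_lpfs:
        out *= f
    return out

print(total_lpf([0.5, 0.5]))          # the assumed standard: 0.25
print(total_lpf([0.3, 0.6, 0.8]))     # evaluated per pathway instead
```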
Krishna, Akhouri P.; Kumar, Santosh
2013-10-01
Landslide hazard assessments using computational models, such as artificial neural network (ANN) and frequency ratio (FR) models, were carried out covering one of the important mountain highways in the Central Himalaya of the Indian Himalayan Region (IHR). Landslide influencing factors were either calculated or extracted from spatial databases, including recent remote sensing data from LANDSAT TM, the CARTOSAT digital elevation model (DEM) and the Tropical Rainfall Measuring Mission (TRMM) satellite for rainfall data. The ANN was implemented using a multi-layered feed-forward architecture with different input, output and hidden layers. This model, based on the back-propagation algorithm, derived weights for all possible parameters of landslides and the causative factors considered. The training sites for landslide-prone and non-prone areas were identified and verified through details gathered from remote sensing and other sources. Frequency ratio (FR) models are based on observed relationships between the distribution of landslides and each landslide-related factor. The FR model implementation proved useful for assessing the spatial relationships between landslide locations and the factors contributing to their occurrence. The above computational models generated respective susceptibility maps of landslide hazard for the study area. This further allowed the simulation of landslide hazard maps on a medium scale using a GIS platform and remote sensing data. Upon validation and accuracy checks, it was observed that both models produced good results, with FR having some edge over ANN-based mapping. Such statistical and functional models led to a better understanding of the relationships between the landslides and preparatory factors, as well as ensuring lower levels of subjectivity compared to qualitative approaches.
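The FR statistic itself is simple to compute; a sketch with invented class counts for one conditioning factor such as slope (FR > 1 marks a class favourable to the hazard):

```python
def frequency_ratio(hazard_in_class, hazard_total, cells_in_class, cells_total):
    # Share of hazard cells in the class divided by share of all cells in it
    return (hazard_in_class / hazard_total) / (cells_in_class / cells_total)

# Toy slope classes: (landslide cells, all cells) per class
for name, h, c in [("0-15 deg", 5, 4000), ("15-30 deg", 40, 3000),
                   ("30-45 deg", 55, 3000)]:
    print(name, round(frequency_ratio(h, 100, c, 10000), 2))
```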
A mental models approach to exploring perceptions of hazardous processes
International Nuclear Information System (INIS)
Bostrom, A.H.H.
1990-01-01
Based on mental models theory, a decision-analytic methodology is developed to elicit and represent perceptions of hazardous processes. An application to indoor radon illustrates the methodology. Open-ended interviews were used to elicit non-experts' perceptions of indoor radon, with explicit prompts for knowledge about health effects, exposure processes, and mitigation. Subjects then sorted photographs into radon-related and unrelated piles, explaining their rationale aloud as they sorted. Subjects demonstrated a small body of correct but often unspecific knowledge about exposure and effects processes. Most did not mention radon-decay processes, and seemed to rely on general knowledge about gases, radioactivity, or pollution to make inferences about radon. Some held misconceptions about contamination and health effects resulting from exposure to radon. In two experiments, subjects reading brochures designed according to the author's guidelines outperformed subjects reading a brochure distributed by the EPA on a diagnostic test, and did at least as well on an independently designed quiz. In both experiments, subjects who read any one of the brochures had more complete and correct knowledge about indoor radon than subjects who did not, whose knowledge resembled the radon-interview subjects'
Evaluating the hazard from Siding Spring dust: Models and predictions
Christou, A.
2014-12-01
Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU), about a third of a lunar distance, from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan), from high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time available to characterise this object before the Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models have evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.
Modelling Inland Flood Events for Hazard Maps in Taiwan
Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.
2015-12-01
Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Over the last 13 to 16 years of data, some 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA), 2.5 times the world average. The spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of which falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return period maps at 1 arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage
Conceptual geoinformation model of natural hazards risk assessment
Kulygin, Valerii
2016-04-01
Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme-event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of origin of natural hazards, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes in time. A scenario approach is utilized to account for spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of ecosystem-service use by economic activities and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.
Modeling of finite aspect ratio effects on current drive
International Nuclear Information System (INIS)
Wright, J.C.; Phillips, C.K.
1996-01-01
Most 2D RF modeling codes use a parameterization of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Engelmann form, with the field polarizations calculated by a full wave code. This eliminates the need for the approximation inherent in the parameterization. Current profiles are then calculated using the adjoint formulation. This approach has been implemented in the FISIC code. The accuracy of the parameterization of the current drive efficiency, η, is judged by comparison with the direct adjoint calculation, η = ∫ Γ · ∇_v χ d³v / ∫ Γ · ∇_v ε d³v, where χ is the adjoint function, ε is the kinetic energy, and Γ is the quasilinear flux. It is shown that for large aspect ratio devices (inverse aspect ratio → 0), the parameterization is nearly identical to the direct calculation. As the aspect ratio approaches unity, visible differences between the two calculations appear.
International Nuclear Information System (INIS)
Paris, P.
1989-11-01
This report describes a model which may be used to derive hazardous waste concentration limits in order to prevent groundwater pollution from landfill disposal. First the leachate concentration limits are determined, taking into account the attenuation capacity of the landfill site as a whole; waste concentrations are then derived by an elution model which assumes a constant ratio between liquid and solid concentrations. In the example, two types of landfill have been considered, and in each case concentration limits have been calculated for some hazardous substances and compared with the corresponding regulatory limits. (author)
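Under the stated constant liquid-solid ratio assumption, the elution step is a single multiplication; a sketch with an illustrative partition coefficient (values are not from the report):

```python
def waste_limit(leachate_limit_mg_L, k_L_per_kg):
    """Waste concentration limit (mg/kg) from a leachate limit (mg/L),
    assuming a constant solid/liquid concentration ratio K (L/kg)."""
    return k_L_per_kg * leachate_limit_mg_L

print(waste_limit(leachate_limit_mg_L=0.05, k_L_per_kg=10.0))  # 0.5 mg/kg
```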
Modelling the costs of natural hazards in games
Bostenaru-Dan, M.
2012-04-01
Simulation games about the city are sought today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another game genre besides video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of "upgrade" and not just extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author as the worst for mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in teaching of urban
Likelihood ratio model for classification of forensic evidence
Energy Technology Data Exchange (ETDEWEB)
Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)
2009-05-29
One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H_1)/p(E|H_2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log_10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
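A toy, univariate illustration of the LR evaluation (kernel density estimates on a single dRI-like variable with simulated values; the paper's model is multivariate and separates between- and within-object variability):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
background = rng.normal(-4.0, 0.8, 500)   # dRI-like values across many objects
same_category = rng.normal(-5.0, 0.3, 30) # dRI-like values for the category of H1

evidence = -4.9
lr = (gaussian_kde(same_category)(evidence)[0]
      / gaussian_kde(background)(evidence)[0])
print("LR =", round(lr, 2))   # LR > 1 supports H1, LR < 1 supports H2
```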
Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios
Custer, Rocco; Nishijima, Kazuyoshi
2012-01-01
In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...
Constrained variability of modeled T:ET ratio across biomes
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.
Expansion of Collisional Radiative Model for Helium line ratio spectroscopy
Cinquegrani, David; Cooper, Chris; Forest, Cary; Milhone, Jason; Munoz-Borges, Jorge; Schmitz, Oliver; Unterberg, Ezekial
2015-11-01
Helium line ratio spectroscopy is a powerful technique of active plasma edge spectroscopy. It enables reconstruction of plasma edge parameters like electron density and temperature by use of suitable Collisional Radiative Models (CRM). An established approach is successful at moderate plasma densities (~10^18 m^-3 range) and temperatures (30-300 eV), taking recombination and charge exchange to be negligible. The goal of this work is to experimentally explore the limitations of this approach to CRM. For basic validation the Madison Plasma Dynamo eXperiment (MPDX) will be used. MPDX offers a very uniform plasma and spherical symmetry at low temperature (5-20 eV) and low density (10^16-10^17 m^-3). Initial data from MPDX show a deviation in CRM results when compared to Langmuir probe data. This discrepancy points to the importance of recombination effects. The validated model is applied to first-time measurements of electron density and temperature in front of an ICRH antenna at the TEXTOR tokamak. These measurements are important to understand RF coupling and PMI physics at the antenna limiters. Work supported in part by start-up funds of the Department of Engineering Physics at the UW-Madison, USA and NSF CAREER award PHY-1455210.
A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis
Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva
2018-03-01
The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. Such clusters are followed by periods of quiescence in which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random occurrence of earthquakes in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard results are estimated by a hazard function which comprises three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard representing the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals has been calculated for illustrative purposes.
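A minimal numerical sketch of such a three-part hazard function is given below. The functional forms (exponential decay for the waning hazard of the last cluster, a Weibull-type rising hazard toward the next cluster, and a constant background rate) and all parameter values are illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np

def composite_hazard(t, h0=0.02, decay=0.1, k=2.5, scale=500.0, lam=0.01):
    """Illustrative three-part hazard (forms and units are assumptions):
    a decreasing hazard after the last large-earthquake cluster, an
    increasing hazard toward the next cluster, and a constant hazard of
    small-to-moderate background earthquakes."""
    h_last = h0 * np.exp(-decay * t)               # decaying cluster hazard
    h_next = (k / scale) * (t / scale) ** (k - 1)  # rising renewal hazard
    return h_last + h_next + lam                   # total time-dependent hazard

years = np.arange(0.0, 200.0, 25.0)
for t, h in zip(years, composite_hazard(years)):
    print(f"t = {t:5.0f} yr   h(t) = {h:.4f}")
```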
Energy Technology Data Exchange (ETDEWEB)
Varela, M.
2001-07-01
The introduction of a new technology supposes that, as its production and utilisation increase, its operation also improves and its investment and production costs decrease. The accumulation of experience and learning with a new technology increases in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time. However, this trend is traditionally incorporated in these energy models in an exogenous way, that is to say, merely as a function of time. This practice is applied to the cost indicators of the technology, such as the specific investment costs, or to the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy systems, including the endogenous modelling of the process of technological progress. Finally, it compares several of the most used global energy models (MARKAL, MESSAGE and ERIS), focusing on how these models use the concept of technological learning. (Author) 17 refs.
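The one-factor experience curve underlying this kind of learning, whether treated exogenously or endogenously, can be stated in a few lines; the parameter values below are illustrative only.

```python
# One-factor learning curve: each doubling of cumulative production x
# reduces specific cost by the learning rate LR = 1 - 2**(-b).
def specific_cost(x, c0, b):
    """Specific cost after cumulative production x, with cost c0 at x = 1."""
    return c0 * x ** (-b)

b = 0.322  # learning index; gives LR = 1 - 2**(-0.322), roughly 20% per doubling
print(f"learning rate = {1 - 2 ** (-b):.2%}")
for x in (1, 2, 4, 8):
    print(f"cumulative production {x}: cost = {specific_cost(x, 1000.0, b):.1f}")
```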
Strauch, R. L.; Istanbulluoglu, E.
2017-12-01
We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations, we relate the susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation, while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to a higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA, and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
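A minimal sketch of the frequency ratio calculation described here is given below, using toy rasters; variable names and data are illustrative, not the study's inputs.

```python
import numpy as np

def frequency_ratio(attr_class, landslide_mask):
    """FR per class of a site attribute raster: the share of landslide cells
    falling in a class divided by the share of all cells in that class."""
    fr = {}
    for c in np.unique(attr_class):
        in_class = attr_class == c
        pct_slides = landslide_mask[in_class].sum() / max(landslide_mask.sum(), 1)
        pct_cells = in_class.sum() / attr_class.size
        fr[c] = pct_slides / pct_cells
    return fr

# Toy rasters: slope classes 0-3 and observed landslide cells (illustrative).
rng = np.random.default_rng(0)
slope_class = rng.integers(0, 4, size=(100, 100))
landslides = rng.random((100, 100)) < 0.01 * (slope_class + 1)  # steeper -> more slides
print(frequency_ratio(slope_class, landslides))
# The susceptibility index is then the sum of FRs across attributes per cell.
```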
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scales. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazard assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. It is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazard assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena span a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamic approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, the questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazard assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim of providing a foundation for future work in developing an international consensus on volcanic hazard assessment methods.
International Nuclear Information System (INIS)
Kraus, N.N.; Slovic, P.
1988-01-01
Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions.
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
Energy Technology Data Exchange (ETDEWEB)
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U.S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
A double moral hazard model of organization design
Berkovitch, Elazar; Israel, Ronen; Spiegel, Yossi
2007-01-01
We develop a theory of organization design in which the firm's structure is chosen to mitigate moral hazard problems in the selection and the implementation of projects. For a given set of projects, the 'divisional structure' which gives each agent the full responsibility over a subset of projects is in general more efficient than the functional structure under which projects are implemented by teams of agents, each of whom specializes in one task. However, the ex post efficiency of the divis...
International Nuclear Information System (INIS)
Alzbutas, R.; Ostapchuk, S.; Borysiewicz, M.; Decker, K.; Kumar, Manorma; Haeggstroem, A.; Nitoi, M.; Groudev, P.; Parey, S.; Potempski, S.; Raimond, E.; Siklossy, T.
2016-01-01
The goal of this report is to provide guidance on practices to model extreme weather hazards and implement them in extended level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the End Users Workshop. This guidance focuses on extreme weather hazards, namely: extreme wind, extreme temperature and snow pack. Other hazards, however, are considered in cases where they are correlated/associated with the hazard under discussion. The guidance developed refers to existing guidance whenever possible. As recommended by end users, this guidance covers questions of developing integrated and/or separate extreme weather PSA models. (authors)
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distress cut-off indicator must shift with economic conditions rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distress cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without them, suggesting that accuracy is not improved for one-stage models which pool the firm-specific and macroeconomic factors together. With regard to the two-stage models, the negative credit cycle index implies a worse economic status during the test period, so the distress cut-off point is adjusted upward based on this negative credit cycle index. After the two-stage models employ this adjusted cut-off point to discriminate the distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
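A minimal sketch of the first stage, a discrete-time hazard estimated by logistic regression on a firm-period panel, is shown below; the covariates and simulated data are illustrative stand-ins for the paper's financial ratios and market factors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-period panel: one row per firm-year, with distress = 1
# in the year a firm becomes distressed (names and data are illustrative).
rng = np.random.default_rng(1)
n = 5000
panel = pd.DataFrame({
    "debt_ratio": rng.normal(0.5, 0.15, n),
    "roa": rng.normal(0.03, 0.05, n),
    "excess_return": rng.normal(0.0, 0.2, n),
})
index = -4 + 3 * panel["debt_ratio"] - 8 * panel["roa"] - panel["excess_return"]
panel["distress"] = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(int)

# Discrete-time hazard: P(distress in period t | surviving to t) via logit.
X = sm.add_constant(panel[["debt_ratio", "roa", "excess_return"]])
model = sm.Logit(panel["distress"], X).fit(disp=0)
print(model.params)
# A cut-off on the fitted probability (shifted with the credit cycle, as the
# two-stage method suggests) then classifies firms as distressed or not.
```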
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
Directory of Open Access Journals (Sweden)
J. Blahut
2010-11-01
Full Text Available Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise
Yassin, Mohamed F
2013-06-01
Due to heavy traffic emissions within the urban environment, air quality has worsened year by year over the last decade and has become a hazard to public health. In the present work, the flow and dispersion of gaseous emissions from vehicle exhaust in a street canyon were investigated numerically under changes of aspect ratio and wind direction. The three-dimensional flow and dispersion of gaseous pollutants were modeled using a computational fluid dynamics (CFD) model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The diffusion flow field in the atmospheric boundary layer within the street canyon was studied for different aspect ratios (W/H = 1/2, 3/4, and 1) and wind directions (θ = 90°, 112.5°, 135°, and 157.5°). The numerical models were validated against wind tunnel results to optimize the turbulence model, and the numerical results agreed well with the wind tunnel results. The simulations demonstrated that the minimum concentration at human respiration height within the street canyon occurred on the windward side for aspect ratios W/H = 1/2 and 1 and wind directions θ = 112.5°, 135°, and 157.5°. The pollutant concentration level decreases as the wind direction and aspect ratio increase, while the wind velocity and turbulence intensity increase as the aspect ratio and wind direction increase.
Conceptual Development of a National Volcanic Hazard Model for New Zealand
Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom
2017-06-01
We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.
Modeling of seismic hazards for dynamic reliability analysis
International Nuclear Information System (INIS)
Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.
1993-01-01
This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra are involved in the energy class, while power and response spectra and PGAs are involved in the power class. These relationships are also investigated using ground motion records. Through these investigations, we show the efficiency of employing the total energy as an index of SHCs, which can be determined in the time and frequency domains and has less variance than the other indices. In addition, we propose a procedure for DRA based on total energy. (author)
Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0
The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.
Snakes as hazards: modelling risk by chasing chimpanzees.
McGrew, William C
2015-04-01
Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.
Analyzing Right-Censored Length-Biased Data with Additive Hazards Model
Institute of Scientific and Technical Information of China (English)
Mu ZHAO; Cun-jie LIN; Yong ZHOU
2017-01-01
Length-biased data are often encountered in observational studies, when the survival times are left-truncated and right-censored and the truncation times follow a uniform distribution. In this article, we propose to analyze such data with the additive hazards model, which specifies that the hazard function is the sum of an arbitrary baseline hazard function and a regression function of covariates. We develop estimating equation approaches to estimate the regression parameters. The resultant estimators are shown to be consistent and asymptotically normal. Some simulation studies and a real data example are used to evaluate the finite sample properties of the proposed estimators.
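The flavour of additive hazards estimation can be illustrated with Aalen's additive model as implemented in the lifelines library (assumed available); this is a close relative of the model in the abstract, and the sketch uses plain right-censored data rather than the length-biased sampling treated in the paper.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

# Simulated right-censored survival data with true hazard 0.5 + 0.8*z
# (illustrative; left truncation as in the paper would need extra care).
rng = np.random.default_rng(2)
n = 500
z = rng.random(n)
t_event = rng.exponential(1.0 / (0.5 + 0.8 * z))
t_cens = rng.exponential(2.0, n)
df = pd.DataFrame({
    "z": z,
    "T": np.minimum(t_event, t_cens),
    "E": (t_event <= t_cens).astype(int),
})

# Fit the additive model; coefficients enter the hazard as a sum, not a product.
aaf = AalenAdditiveFitter(coef_penalizer=0.1)
aaf.fit(df, duration_col="T", event_col="E")
print(aaf.cumulative_hazards_.tail())  # cumulative regression functions
```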
Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method
Nugraha, A. L.; Awaluddin, M.; Sasmito, B.
2018-02-01
One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, as the capital of Central Java Province, is one of the cities with high natural disaster intensity. Natural disasters that frequently strike Semarang City include tidal floods, river floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi-hazard mapping to support its disaster mitigation planning. Multi-hazard maps can be modelled from parameters such as slope, rainfall, land use, and soil type maps. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques to provide a good classification in determining disaster threats. The GIS-Fuzzy method builds a multi-hazard map of Semarang City that can deliver results with good accuracy and an appropriate spread of threat classes, so as to provide disaster information for the disaster mitigation planning of Semarang City. The multi-hazard modelling using GIS-Fuzzy shows that the membership type with the best accuracy is the Gaussian membership, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the memberships tested.
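The Gaussian membership function singled out here is easy to state; the sketch below fuzzifies a toy slope layer into a hazard class, with centre and spread values chosen purely for illustration.

```python
import numpy as np

def gauss_membership(x, c, sigma):
    """Gaussian fuzzy membership: degree to which value x belongs to a
    hazard class centred at c with spread sigma (0 = none, 1 = full)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Illustrative: fuzzify slope (degrees) into a 'high landslide threat' class.
slopes = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
print(gauss_membership(slopes, c=40.0, sigma=10.0))
# Fuzzified layers (slope, rainfall, land use, soil type) would then be
# combined by weighted overlay to produce the multi-hazard map.
```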
[Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].
Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang
2014-02-01
In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper, segmentation of particulate matters with sizes below 2.5 μm was performed based on their formation mechanisms and their hazard level to human beings and the environment. Meanwhile, taking into account the mass concentration, number concentration, enrichment factor of Hg, and content of Hg element in different coal ashes, a comprehensive model aimed at evaluating the hazard of PM2.5 emitted by coal-fired boilers was established in this paper. Finally, using field experimental data from the previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.
Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models
Directory of Open Access Journals (Sweden)
Yang beibei Ji
2014-01-01
Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are derived for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are identified for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
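Choosing a best-fitting duration distribution per incident type can be sketched with scipy by comparing maximum-likelihood fits; the simulated durations and the log-likelihood comparison below are illustrative, not the paper's procedure.

```python
import numpy as np
from scipy import stats

# Simulated clearance times in minutes (illustrative stand-in for SIMS data).
rng = np.random.default_rng(3)
durations = {
    "crash": rng.gamma(2.0, 20.0, 400),
    "stationary": stats.fisk.rvs(3.0, scale=25.0, size=400, random_state=rng),
    "hazard": rng.weibull(1.5, 400) * 30.0,
}

# Candidate families: fisk is scipy's name for the log-logistic distribution.
fitters = {"gamma": stats.gamma, "log-logistic": stats.fisk, "weibull": stats.weibull_min}
for incident, data in durations.items():
    # Pick the family with the highest log-likelihood at its MLE (loc fixed at 0).
    best = max(fitters, key=lambda name: np.sum(
        fitters[name].logpdf(data, *fitters[name].fit(data, floc=0))))
    print(incident, "->", best)
```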
Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan
Directory of Open Access Journals (Sweden)
Yu-Wen Chang
2017-01-01
Full Text Available Given the probability distribution function of the recurrence interval and the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of active faults during a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source. It serves as a reference for developing a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the faults. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent models of faults [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used on those faults, particularly where the elapsed time since the last event on a fault (such as the Chelungpu fault) is short.
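The contrast between a BPT (inverse Gaussian) renewal model and the time-independent Poisson assumption can be illustrated with conditional rupture probabilities; the recurrence parameters below are illustrative, not values for any Taiwanese fault.

```python
import numpy as np
from scipy.stats import invgauss

def bpt_conditional_prob(elapsed, window, mean_ri, alpha):
    """P(event in [elapsed, elapsed + window] | no event up to elapsed) under
    a Brownian passage time renewal model with mean recurrence interval
    mean_ri and aperiodicity alpha."""
    # scipy's invgauss(mu, scale): mean = mu * scale and CV = sqrt(mu),
    # so mu = alpha**2 and scale = mean_ri / alpha**2 match the BPT parameters.
    dist = invgauss(alpha**2, scale=mean_ri / alpha**2)
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

mean_ri, alpha, window = 340.0, 0.5, 50.0  # illustrative values (years)
print("BPT, shortly after last event:", bpt_conditional_prob(30.0, window, mean_ri, alpha))
print("BPT, late in the cycle:      ", bpt_conditional_prob(300.0, window, mean_ri, alpha))
print("Poisson (time-independent):  ", 1 - np.exp(-window / mean_ri))
```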
A Model Suggestion to Predict Leverage Ratio for Construction Projects
Özlem Tüz; Şafak Ebesek
2013-01-01
By its nature, construction is an industry with high uncertainty and risk, and it carries high leverage ratios. Firms with low equity work on big projects through the progress payment system, but in this case even a small shortfall in the planned cash flows constitutes a major risk for the company. The use of leverage allows large-scale, high-profit targets to be achieved with a small investment, but it also brings high risk with it. Investors may lose all or a portion of th...
Teamwork tools and activities within the hazard component of the Global Earthquake Model
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis, ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called the OpenQuake-engine (http://globalquakemodel.org). In this communication we provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently ongoing initiatives like the development of a suite of tools for building PSHA input models. Discussion, comments and criticism by colleagues in the audience will be highly appreciated.
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Three multimedia models used at hazardous and radioactive waste sites
International Nuclear Information System (INIS)
Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.
1996-02-01
Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; and default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.
Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios
DEFF Research Database (Denmark)
Custer, Rocco; Nishijima, Kazuyoshi
In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...
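A stripped-down probabilistic disaggregation in the spirit described here can be sketched with a Dirichlet-multinomial draw; the paper uses a scaled Dirichlet, while the standard Dirichlet is used below for simplicity, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# 120 buildings in a municipality must be spread over 4 hazard-resolution
# cells; land-cover shares serve as the auxiliary indicator (illustrative).
n_buildings = 120
land_cover_share = np.array([0.5, 0.3, 0.15, 0.05])

# Dirichlet weights centred on the indicator capture imperfect dependence;
# a multinomial draw then distributes the building counts per cell.
concentration = 50.0  # larger -> less disaggregation uncertainty
samples = np.array([
    rng.multinomial(n_buildings, rng.dirichlet(concentration * land_cover_share))
    for _ in range(10000)
])
print("mean counts per cell:", samples.mean(axis=0))
print("std of counts per cell:", samples.std(axis=0))
```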
Modeling contractor and company employee behavior in high hazard operation
Lin, P.H.; Hanea, D.; Ale, B.J.M.
2013-01-01
The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.
Modeling and Testing Landslide Hazard Using Decision Tree
Directory of Open Access Journals (Sweden)
Mutasem Sh. Alkhasawneh
2014-01-01
Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitudinal curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy-to-interpret tree-like structure. Four models were created using Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using Exhaustive CHAID (82.0%) compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
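CHAID and QUEST have no standard scikit-learn implementations, but the CRT (CART) variant with 10-fold cross-validation and factor importances can be sketched as below, on stand-in data rather than the study's DEM-derived factors.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for the 21 factors and the balanced landslide /
# no-landslide samples described in the abstract.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 21))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000)) > 0  # toy label

tree = DecisionTreeClassifier(max_depth=6, random_state=0)  # CART-style tree
scores = cross_val_score(tree, X, y, cv=10)                 # 10-fold CV as in the paper
print(f"cross-validated accuracy: {scores.mean():.3f}")

tree.fit(X, y)
top5 = np.argsort(tree.feature_importances_)[::-1][:5]      # most important factors
print("top-5 factor indices:", top5)
```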
A Thermoacoustic Model for High Aspect Ratio Nanostructures
Directory of Open Access Journals (Sweden)
Masoud S. Loeian
2016-09-01
Full Text Available In this paper, we have developed a new thermoacoustic model for predicting the resonance frequency and quality factors of one-dimensional (1D) nanoresonators. Considering a nanoresonator as a fixed-free Bernoulli-Euler cantilever, an analytical model has been developed to show the influence of the material and geometrical properties of 1D nanoresonators on their mechanical response without any damping. Diameter and elastic modulus have a direct relationship, and length an inverse relationship, with the strain energy and stress at the clamped end of the nanoresonator. A thermoacoustic multiphysics COMSOL model has been elaborated to simulate the frequency response of vibrating 1D nanoresonators in air. The results are an excellent match with experimental data from independently published literature reports, and the results of this model are consistent with the analytical model. Considering air and thermal damping in the thermoacoustic model, the quality factor of a nanowire has been estimated, and the results show that zinc oxide (ZnO) and silver-gallium (Ag2Ga) nanoresonators are potential candidates as nanoresonators, nanoactuators, and for scanning probe microscopy applications.
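The analytical part rests on standard fixed-free Bernoulli-Euler beam theory, which can be sketched directly; the material values below are rough literature figures for a ZnO nanowire, used purely for illustration.

```python
import numpy as np

def cantilever_f1(length, diameter, youngs_modulus, density):
    """Fundamental flexural frequency of a fixed-free Bernoulli-Euler
    cylindrical cantilever: f1 = (lambda1**2 / (2*pi)) * sqrt(E*I / (rho*A*L**4)),
    with lambda1 ~ 1.875 for the first mode."""
    lam1 = 1.8751
    area = np.pi * diameter**2 / 4.0      # cross-sectional area
    inertia = np.pi * diameter**4 / 64.0  # second moment of area
    return (lam1**2 / (2 * np.pi)) * np.sqrt(
        youngs_modulus * inertia / (density * area * length**4))

# Illustrative ZnO nanowire: E ~ 140 GPa, rho ~ 5600 kg/m^3, 5 um long, 100 nm wide.
print(f"f1 ~ {cantilever_f1(5e-6, 100e-9, 140e9, 5600.0) / 1e6:.2f} MHz")
```

The f1 ∝ d/L² scaling visible in the formula is the direct diameter / inverse length relationship the abstract describes.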
Modeling Wildfire Hazard in the Western Hindu Kush-Himalayas
Bylow, D.
2012-12-01
Wildfire regimes are a leading driver of global environmental change affecting a diverse array of global ecosystems. Particulates and aerosols produced by wildfires are a primary source of air pollution, making the early detection and monitoring of wildfires crucial. The objectives of this study were to model regional wildfire potential and identify environmental, topological, and sociological factors that contribute to the ignition of wildfire events in the Western Hindu Kush-Himalayas of South Asia. These factors were used to model regional wildfire potential through multi-criteria evaluation using a method of weighted linear combination. Moderate Resolution Imaging Spectroradiometer (MODIS) and geographic information system (GIS) data were integrated to analyze regional wildfires and construct the model. Model validation was performed using a holdout cross-validation method. The study produced a significant model of wildfire potential in the Western Hindu Kush-Himalayas.
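A weighted linear combination over normalized criterion layers is straightforward to sketch; the layer names and weights below are assumptions for illustration, not the study's values.

```python
import numpy as np

# Normalized criterion layers (0 = low, 1 = high fire potential) on a toy
# grid; names and weights are illustrative only.
rng = np.random.default_rng(6)
layers = {
    "fuel_load": rng.random((50, 50)),
    "dryness": rng.random((50, 50)),
    "slope": rng.random((50, 50)),
    "distance_to_settlement": rng.random((50, 50)),
}
weights = {"fuel_load": 0.4, "dryness": 0.3, "slope": 0.2, "distance_to_settlement": 0.1}

# Weighted linear combination: potential = sum_i w_i * x_i, with sum w_i = 1.
assert abs(sum(weights.values()) - 1.0) < 1e-9
wildfire_potential = sum(weights[k] * layers[k] for k in layers)
print(wildfire_potential.min(), wildfire_potential.max())
```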
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
Measures to assess the prognostic ability of the stratified Cox proportional hazards model
DEFF Research Database (Denmark)
The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne
2009-01-01
Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...
Modelling of wander ratios, travel speeds and productivity of cable ...
African Journals Online (AJOL)
The study, however, found that both cable and grapple skidders were only hauling approximately 50% of their capacity and for that reason multiple regression models to predict potential production at full payload capacity were developed for the two skidder configurations. Multiple regression was also used to develop ...
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
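A sketch of one model-based assessment of the proportional hazards assumption, using the lifelines library's score test on scaled Schoenfeld residuals (assumed available), is shown below; the simulated data deliberately violate the assumption by giving the two groups Weibull event times with different shape parameters, so their hazards are non-proportional.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

# Simulate a binary covariate whose hazard ratio varies over time:
# Weibull event times with different shapes per group violate PH.
rng = np.random.default_rng(7)
n = 1000
group = rng.integers(0, 2, n)
t = np.where(group == 1, rng.weibull(0.7, n), rng.weibull(1.5, n))
df = pd.DataFrame({"group": group, "T": t, "E": 1})  # all events observed

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
# Score test against a time transform of the scaled Schoenfeld residuals.
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()
```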
An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest
Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon
2017-01-01
In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or t...
Bayesian nonparametric estimation of hazard rate in monotone Aalen model
Czech Academy of Sciences Publication Activity Database
Timková, Jana
2014-01-01
Vol. 50, No. 6 (2014), pp. 849-868. ISSN 0023-5954. Institutional support: RVO:67985556. Keywords: Aalen model; Bayesian estimation; MCMC. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.541, year: 2014. http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf
1993-03-01
the HDA. The model will explicitly account for initial dilution, aerosol evaporation, and entrainment for turbulent jets, which simplifies... D.N., Yohn, J.F., Koopman, R.P. and Brown, T.C., "Conduct of Anhydrous Hydrofluoric Acid Spill Experiments," Proc. Int. Conf. on Vapor Cloud Modeling
Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)
2001-01-01
Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels, and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated on two areas of Southern California, the Santa Monica Mountains and the Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture, and fuel condition (live vs. senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L- and P-band cross-polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade, and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated
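Simple linear spectral unmixing, the building block that MESMA extends by varying the endmember set per pixel, can be sketched with non-negative least squares; the endmember spectra below are made-up numbers, not AVIRIS library spectra.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative endmember spectra (columns): green vegetation, NPV, soil,
# over four made-up spectral bands.
endmembers = np.array([
    [0.05, 0.30, 0.25],
    [0.45, 0.35, 0.30],
    [0.30, 0.40, 0.35],
    [0.10, 0.35, 0.35],
])
# Synthetic mixed pixel: 60% green vegetation, 30% NPV, 10% soil.
pixel = 0.6 * endmembers[:, 0] + 0.3 * endmembers[:, 1] + 0.1 * endmembers[:, 2]

# Non-negative least squares keeps the fractions physically meaningful.
fractions, residual = nnls(endmembers, pixel)
print("fractions:", fractions / fractions.sum(), "residual:", residual)
```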
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments
2011-05-03
trend (i.e., a straight line on log-log scales) given by R ∝ T^(-α), (1) where R is the accumulation rate, T is the time interval of accumulation, and α... Figure 5(A) shows this for a representative set of model parameters; the straight line labeled h represents a linear increase in epipedon thickness with time... Pelletier, "Frequency-magnitude distribution of eolian transport and the geomorphically most-effective windstorm," submitted but not accepted to Geophysical
DEFF Research Database (Denmark)
Hagemann, Kit; Scholderer, Joachim
and experts' understanding of benefits and risks associated with three Novel foods (a potato, rice, and functional food ingredients) using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allow researchers to pit a normative analysis (expert mental models) against a descriptive analysis (consumer mental models). Expert models were elicited by means of a three-wave Delphi procedure from altogether 24 international experts, and consumer models from in-depth interviews with Danish consumers. The results revealed that consumers' and experts' mental models differed in scope. Experts focused on the types of hazards for which risk assessments can be conducted under current legal frameworks, whereas consumers were concerned about issues that lay outside the scope of current legislation. Experts
Evaluation and hydrological modelization in the natural hazard prevention
International Nuclear Information System (INIS)
Pla Sentis, Ildefonso
2011-01-01
Soil degradation negatively affects the soil's functions as a base for food production, for regulation of the hydrological cycle, and for environmental quality. All over the world soil degradation is increasing, partly due to lacks or deficiencies in the evaluation of the processes and causes of this degradation in each specific situation. The processes of soil physical degradation are manifested through several problems such as compaction, runoff, hydric and eolic erosion, and landslides, with collateral effects in situ and at a distance, often with disastrous consequences such as floods, landslides, sedimentation, droughts, etc. These processes are frequently associated with unfavourable changes in the hydrological processes responsible for the water balance and soil hydric regimes, mainly derived from soil use changes, different management practices and climatic changes. The evaluation of these processes using simple simulation models, under several scenarios of climatic change, soil properties and land use and management, would allow the occurrence of these disastrous processes to be predicted, and consequently the appropriate soil conservation practices to be selected and applied to eliminate or reduce their effects. These simulation models require, as a base, detailed climatic information and data on the hydrological properties of soils. Despite the existence of methodologies and commercial equipment (each time more sophisticated and precise) to measure the different physical and hydrological soil properties related to degradation processes, most of them are only applicable under very specific or laboratory conditions. Often indirect methodologies are used, based on relations or empirical indexes without adequate validation, which often leads to expensive mistakes in the evaluation of soil degradation processes and their effects on natural disasters. Preference could be given to simple field methodologies, direct and adaptable to different soil types and climates and to the sample size and the spatial variability of the
Modelling human interactions in the assessment of man-made hazards
International Nuclear Information System (INIS)
Nitoi, M.; Farcasiu, M.; Apostol, M.
2016-01-01
Current human reliability assessment tools are not capable of adequately modelling the human ability to adapt, to innovate and to manage under extreme situations. The paper presents the results obtained by the ICN PSA team in the framework of the FP7 Advanced Safety Assessment Methodologies: extended PSA (ASAMPSA_E) project regarding the investigation of conducting HRA for man-made hazards. The paper proposes a four-step methodology for the assessment of human interactions in external events (definition and modelling of human interactions; quantification of human failure events; recovery analysis; review). The most relevant factors with respect to HRA for man-made hazards (response execution complexity; existence of procedures with respect to the scenario in question; time available for action; timing of cues; accessibility of equipment; harsh environmental conditions) are presented and discussed thoroughly. The challenges identified in relation to HRA for man-made hazards are highlighted. (authors)
International Nuclear Information System (INIS)
Decker, K.; Hirata, K.; Groudev, P.
2016-01-01
The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, identify methodological challenges, and propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances of science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)
Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site
Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.
2012-04-01
The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will differ depending on both the location of the site and the type of landslide considered, since mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and quality of data are generally very heterogeneous at a regional scale, it is necessary to take data uncertainty into account in the analysis. In this perspective, a new hazard modeling method has been developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking data uncertainty into account. The method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert approach is still necessary to finalize the maps; it is the only way to take into account some influential factors in slope stability, such as the heterogeneity of the geological formations or the effects of human intervention. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and on different hydrological contexts varying in time. This communication, realized within the framework of the SafeLand project, is supported by the European Commission under the 7th Framework Programme for Research and Technological Development.
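To illustrate the kind of per-cell computation a program like ALICE chains together (the abstract names a mechanical stability analysis and data uncertainty), here is a hedged Python sketch combining an infinite-slope factor of safety with Monte Carlo sampling of the geotechnical parameters; all distributions and values are invented for illustration, not the program's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
beta = np.radians(30.0)          # slope angle (from the DEM)
z = 2.0                          # soil thickness (m)
gamma, gamma_w = 19.0, 9.81      # unit weights (kN/m^3)
m = 0.8                          # water-table ratio for a given return period
c = rng.normal(5.0, 1.5, n)      # cohesion c' (kPa), assumed distribution
phi = np.radians(rng.normal(32.0, 3.0, n))  # friction angle, assumed

u = gamma_w * m * z * np.cos(beta) ** 2      # pore pressure at the slip plane
fs = (c + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)) / (
    gamma * z * np.sin(beta) * np.cos(beta))
print(f"P(FS < 1) = {np.mean(fs < 1.0):.3f}")  # per-cell failure probability
```

The per-cell failure probability would then feed a map-wide hazard classification of the kind the abstract describes.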
Flood hazard mapping of Palembang City by using 2D model
Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri
2017-11-01
Palembang, the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that is flooded almost every year. Flooding in the city is highly related to the Musi River Basin. According to the Indonesia National Agency of Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause floods in the city, such as high-intensity rainfall, inadequate drainage capacity, and backwater flow due to spring tides. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make the flood problem worse. The objective of this study is to develop a flood hazard map of Palembang City using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding the general guideline of disaster risk assessment. The result for the 25-year return period of flood shows that, with 112.47 km² of inundation area, 14 sub-catchments are categorized at a high hazard level. It is expected that the hazard map can be used for risk assessment.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
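As a toy illustration of one named ingredient, smoothed seismicity, the sketch below spreads a small earthquake catalog onto a rate grid with a fixed Gaussian kernel; the 2014 NSHM uses adaptive smoothing (bandwidth varying with local event density), so the fixed 50 km bandwidth here is a simplifying assumption.

```python
import numpy as np

def smoothed_rate_grid(eq_lonlat, grid_lon, grid_lat, sigma_km=50.0):
    """Spread unit rates from epicenters onto a grid with a Gaussian kernel."""
    km_per_deg = 111.0
    rates = np.zeros((len(grid_lat), len(grid_lon)))
    for lon0, lat0 in eq_lonlat:  # each event contributes a smoothed unit rate
        for i, la in enumerate(grid_lat):
            for j, lo in enumerate(grid_lon):
                dx = (lo - lon0) * km_per_deg * np.cos(np.radians(lat0))
                dy = (la - lat0) * km_per_deg
                rates[i, j] += np.exp(-(dx**2 + dy**2) / (2 * sigma_km**2))
    return rates / rates.sum()  # normalize to a spatial PDF of the rate

quakes = [(-120.1, 36.2), (-120.3, 36.4), (-118.0, 34.1)]  # toy catalog
grid = smoothed_rate_grid(quakes, np.arange(-121, -117, 0.5),
                          np.arange(33, 37, 0.5))
```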
Pak Kin Wong; Hang Cheong Wong; Chi Man Vong; Tong Meng Iong; Ka In Wong; Xianghui Gao
2015-01-01
Effective air-ratio control is desirable to maintain the best engine performance. However, traditional air-ratio control assumes the lambda sensor located at the tail pipe works properly and relies strongly on the air-ratio feedback signal measured by the lambda sensor. When the sensor is warming up during cold start or under failure, the traditional air-ratio control no longer works. To address this issue, this paper utilizes an advanced modelling technique, kernel extreme learning machine (...
A simple analytical model for dynamics of time-varying target leverage ratios
Lo, C. F.; Hui, C. H.
2012-03-01
In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
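A hedged numerical companion to this abstract: an Euler-Maruyama simulation of a leverage ratio pulled toward a time-varying target, exposing the three quantities the authors examine (pace of adjustment, volatility, and current ratio). The target path here is simply assumed, whereas the paper derives it self-consistently from the coupled Ito equations via the Fokker-Planck approach.

```python
import numpy as np

def simulate_leverage(L0, kappa, sigma, L_target, dt=1 / 252, n_steps=2520):
    """Euler-Maruyama path of a mean-reverting leverage ratio (toy model)."""
    rng = np.random.default_rng(1)
    L = np.empty(n_steps + 1)
    L[0] = L0
    for t in range(n_steps):
        drift = kappa * (L_target(t * dt) - L[t])   # pull toward the target
        L[t + 1] = L[t] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return L

# Assumed target path: decaying toward a long-run level of 0.30.
path = simulate_leverage(L0=0.35, kappa=0.5, sigma=0.08,
                         L_target=lambda t: 0.30 + 0.05 * np.exp(-t))
```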
An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.
Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon
2017-11-27
In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is well suited to the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.
Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.
2015-12-01
Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards, along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
DEFF Research Database (Denmark)
Nielsen, Jan; Parner, Erik
2010-01-01
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma…
Independent screening for single-index hazard rate models with ultrahigh dimensional features
DEFF Research Database (Denmark)
Gorst-Rasmussen, Anders; Scheike, Thomas
2013-01-01
can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...
ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis
Czech Academy of Sciences Publication Activity Database
Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.
2016-01-01
Vol. 3, 12 December (2016), article no. 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography
An estimating equation for parametric shared frailty models with marginal additive hazards
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Martinussen, Torben
2004-01-01
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable...
An advanced model for spreading and evaporation of accidentally released hazardous liquids on land
Trijssenaar-Buhre, I.J.M.; Sterkenburg, R.P.; Wijnant-Timmerman, S.I.
2009-01-01
Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a
An advanced model for spreading and evaporation of accidentally released hazardous liquids on land
Trijssenaar-Buhre, I.J.M.; Wijnant-Timmerman, S.L.
2008-01-01
Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a
2015-04-01
HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are… [fragmentary record; surviving references: Audiology. 1991;30:345–356; Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with…]
Combining computational models for landslide hazard assessment of Guantánamo province, Cuba
Castellanos Abella, E.A.
2008-01-01
As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow), which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine the extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 years and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computing and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
A distribution ratio model of strontium by crown ether extraction from simulated HLLW
International Nuclear Information System (INIS)
Chen Jing; Wang Qiuping; Wang Jianchen; Song Chongli
1995-01-01
An empirical distribution ratio model for strontium extraction by dicyclohexano-18-crown-6-n-octanol from simulated high-level waste is established. The experimental points for the model are designed by a homogeneous experimental design method. The regression of the distribution ratio model of strontium is carried out by the complex-optimization method. The model is verified against experimental distribution ratio data under different extraction conditions. The results show that the relative deviations between the calculated data and the experimental ones are within ±10%, with a mean relative deviation of 4.4%. The empirical model, together with an iteration program, can be used for strontium extraction process calculations.
Building a risk-targeted regional seismic hazard model for South-East Asia
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way, with a focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground-motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochinese countries. The source model builds upon refined modelling approaches to characterize (1) seismic activity on crustal faults from geologic and geodetic data, (2) seismicity along the interface of subduction zones and within the slabs, and (3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.
Rich dynamics of a food chain model with ratio-dependent type III ...
African Journals Online (AJOL)
Rich dynamics of a food chain model with ratio-dependent type III functional responses. … Stability analysis of the model is carried out using the usual theory of ordinary … that Hopf bifurcation may also occur when the delay passes its critical value.
Directory of Open Access Journals (Sweden)
Elias .
2011-03-01
Full Text Available The case study was conducted in an Acacia mangium plantation area at BKPH Parung Panjang, KPH Bogor. The objective of the study was to formulate equation models for tree root carbon mass and the root-to-shoot carbon mass ratio of the plantation. It was found that the carbon content in the parts of tree biomass (stems, branches, twigs, leaves, and roots) differed, with the highest and lowest carbon content in the main stem and in the leaves, respectively. The main stem and leaves of the tree accounted for 70% of tree biomass. The root-shoot ratio of root biomass to above-ground tree biomass was 0.1443, and the root-shoot ratio of root biomass to main stem biomass was 0.25771; 75% of tree carbon mass was in the main stem and roots of the tree. It was also found that the root-shoot ratio of root carbon mass to above-ground tree carbon mass was 0.1442, and the root-shoot ratio of root carbon mass to main stem carbon mass was 0.2034. All allometric equation models of tree root carbon mass of A. mangium have a high goodness-of-fit, as indicated by their high adjusted R². Keywords: Acacia mangium, allometric, root-shoot ratio, biomass, carbon mass
International Nuclear Information System (INIS)
Rebour, V.; Georgescu, G.; Leteinturier, D.; Raimond, E.; La Rovere, S.; Bernadara, P.; Vasseur, D.; Brinkman, H.; Groudev, P.; Ivanov, I.; Turschmann, M.; Sperbeck, S.; Potempski, S.; Hirata, K.; Kumar, Manorma
2016-01-01
This report provides a review of existing practices to model and implement external flooding hazards in existing level 1 PSA. The objective is to identify good practices on the modelling of initiating events (internal and external hazards), with a perspective of development of extended PSA and implementation of external events modelling in extended L1 PSA, and to identify its limitations and difficulties as far as possible. The views presented in this report are based on the ASAMPSA_E partners' experience and available publications. The report includes discussions on the following issues: - how to structure a L1 PSA for external flooding events, - information needed from geosciences in terms of hazard modelling and to build relevant modelling for PSA, - how to define and model the impact of each flooding event on SSCs, with distinction between the flooding protective structures and devices and the effect of protection failures on other SSCs, - how to identify and model the common cause failures in one reactor or between several reactors, - how to apply HRA methodology for external flooding events, - how to credit additional emergency response (post-Fukushima measures like mobile equipment), - how to address the specific issues of L2 PSA, - how to perform and present risk quantification. (authors)
Tornado hazard model with the variation effects of tornado intensity along the path length
International Nuclear Information System (INIS)
Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo
2015-01-01
Most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. Japanese electric power companies must assess tornado risks to the plants according to a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends, as the region of interest, the long narrow strip along the coastline with a width of 5-10 km, although the model tends to estimate inadequate wind speeds there due to its limits of application. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)
Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.
Jain, Ram B
2016-08-01
Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature, that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCR were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. Model-based method is the method of choice if all factors that affect UCR are to be accounted for.
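The contrast between the two methods can be made concrete with a short sketch on simulated data (all variable names and effect sizes are invented): the ratio-based method divides by urinary creatinine before testing a group difference, while the model-based method keeps creatinine as a regression covariate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
male = rng.integers(0, 2, n)
ucr = np.exp(0.2 * male + rng.normal(4.6, 0.3, n))       # urinary creatinine
analyte = np.exp(0.1 * male + rng.normal(1.0, 0.4, n))   # urinary analyte

# 1) Ratio-based: divide by UCR, then test the gender difference.
ratio = analyte / ucr
ols_ratio = sm.OLS(np.log(ratio), sm.add_constant(male)).fit()

# 2) Model-based: keep UCR as an independent variable in the regression.
X = sm.add_constant(np.column_stack([male, np.log(ucr)]))
ols_model = sm.OLS(np.log(analyte), X).fit()

# The estimated gender effects differ because UCR itself depends on gender.
print(ols_ratio.params[1], ols_model.params[1])
```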
A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem
Directory of Open Access Journals (Sweden)
Omid Boyer
2013-01-01
Full Text Available Technological progress has caused industrial hazardous waste to increase worldwide. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can result from the location of undesirable facilities and also from the routing of hazardous waste. In this paper a bi-objective mixed-integer programming model for location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk; the risk of population exposure within a bandwidth along the route is used to measure it. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously, and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem demonstrate the conflict between the two objectives: it is possible to decrease the cost value by marginally increasing the transportation risk value, and vice versa. A weighted-sum method is utilized to combine the two objective functions into one objective function. To solve the problem, GAMS software with the CPLEX solver is used. The model is applied to Markazi province in Iran.
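A toy sketch of the weighted-sum scalarization described here (the paper solves the full model in GAMS with CPLEX; this illustration uses the PuLP library, and all sites, costs, and risks are invented numbers):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

sites, sources = ["s1", "s2", "s3"], ["a", "b"]
open_cost = {"s1": 100, "s2": 80, "s3": 120}
pairs = [(a, s) for a in sources for s in sites]
route_cost = dict(zip(pairs, [12, 9, 20, 15, 11, 8]))
route_risk = dict(zip(pairs, [3, 7, 1, 4, 6, 2]))
w = 0.7                                   # weight between cost and risk

prob = LpProblem("hazmat_location_routing", LpMinimize)
y = {s: LpVariable(f"open_{s}", cat="Binary") for s in sites}
x = {p: LpVariable(f"route_{p[0]}_{p[1]}", cat="Binary") for p in pairs}
cost = lpSum(open_cost[s] * y[s] for s in sites) + lpSum(
    route_cost[p] * x[p] for p in pairs)
risk = lpSum(route_risk[p] * x[p] for p in pairs)
prob += w * cost + (1 - w) * risk         # weighted-sum single objective
for a in sources:                         # each source shipped exactly once
    prob += lpSum(x[(a, s)] for s in sites) == 1
for (a, s) in pairs:                      # ship only to open facilities
    prob += x[(a, s)] <= y[s]
prob.solve()
print("open:", [s for s in sites if value(y[s]) > 0.5])
```

Sweeping w between 0 and 1 traces the cost-risk trade-off the authors report.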
Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models
ter Hofstede, F.; Wedel, M.
In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were
An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard
Directory of Open Access Journals (Sweden)
Xiulan Wang
2014-01-01
it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. And the model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.
Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals
DEFF Research Database (Denmark)
Li, Jianing; Scheike, Thomas; Zhang, Mei Jie
2015-01-01
Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray's model. The proposed goodness-of-fit test procedures are based on the cumulative sums…
DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…
Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik
2016-11-01
Dynamic modelling and simulation of an integrated nanofiltration-forward osmosis system was carried out, along with an economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by making wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is well reflected in the high overall correlation coefficient (R² > 0.98) and low relative error (… osmosis loop at a reasonably high flux of 56-58 L per square metre per hour.
van der Net, Jeroen B.; Janssens, A. Cecile J. W.; Eijkemans, Marinus J. C.; Kastelein, John J. P.; Sijbrands, Eric J. G.; Steyerberg, Ewout W.
2008-01-01
Cross-sectional genetic association studies can be analyzed using Cox proportional hazards models with age as time scale, if age at onset of disease is known for the cases and age at data collection is known for the controls. We assessed to what degree and under what conditions Cox proportional
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts, and some of them are "true zeros" indicating that the drug-adverse event pairs cannot occur; these zero counts are distinguished from the other zero counts that are modeled zero counts and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
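A hedged sketch of the ZIP likelihood machinery on a single drug column (toy counts and expected values; the published method scans all drug-adverse event cells, uses an EM algorithm, and controls the false discovery rate):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2

y = np.array([0, 0, 3, 0, 1, 9, 0, 2])                  # observed counts (toy)
E = np.array([1.1, 0.4, 2.0, 0.7, 1.2, 2.5, 0.3, 1.8])  # expected counts (toy)
k = 5                                                   # cell under test

def negloglik(theta, free_rate):
    """ZIP negative log-likelihood; H1 lets cell k have its own rate r."""
    pi = 1.0 / (1.0 + np.exp(-theta[0]))        # zero-inflation prob in (0,1)
    r = np.exp(theta[1]) if free_rate else 1.0  # relative rate of cell k
    lam = E.copy()
    lam[k] *= r
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),
                  np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1))
    return -ll.sum()

fit0 = minimize(negloglik, x0=[0.0], args=(False,))      # H0: r = 1
fit1 = minimize(negloglik, x0=[0.0, 0.0], args=(True,))  # H1: r free
lrt = 2.0 * (fit0.fun - fit1.fun)
print(f"LRT = {lrt:.2f}, p ~ {chi2.sf(lrt, df=1):.4f}")
```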
Linear non-threshold (LNT) radiation hazards model and its evaluation
International Nuclear Information System (INIS)
Min Rui
2011-01-01
In order to introduce the linear non-threshold (LNT) model used in studies of the dose effect of radiation hazards, and to evaluate its application, a comprehensive literature analysis was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can fit cell survival curves well over the high, medium and low absorbed dose ranges. There are still many uncertainties in the assessment model of the effective dose of internal radiation based on the LNT assumptions and individual mean organ equivalents, and it is necessary to establish gender-specific voxel human models, taking gender differences into account. In summary, the advantages and disadvantages of the various models coexist; until a new theory and new models are established, the LNT model still represents the most scientific attitude. (author)
Chiral Lagrangian calculation of nucleon branching ratios in the supersymmetric SU(5) model
International Nuclear Information System (INIS)
Chadha, S.; Daniel, M.
1983-12-01
The branching ratios are calculated for the two body nucleon decay modes involving pseudoscalars in the minimal SU(5) supersymmetric model with three generations using the techniques of chiral dynamics. (author)
Rey, Julien; Beauval, Céline; Douglas, John
2018-02-01
Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundreds of years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
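The core comparison, observed exceedance counts against model-predicted counts over a complete window, can be sketched in a few lines (all numbers are invented for illustration, and a Poisson count model is assumed):

```python
from scipy.stats import poisson

# Toy version of the per-city check: compare observed exceedances of an
# intensity level with the hazard model's prediction over a complete window.
window_years = 300
obs_exceed = 6       # exceedances of intensity V observed in the window
annual_rate = 0.01   # model's annual exceedance rate for the corresponding
                     # acceleration at this city (after intensity conversion)
expected = annual_rate * window_years

# Is the observed count plausible under a Poisson model of occurrences?
p_at_least = poisson.sf(obs_exceed - 1, expected)
print(f"expected {expected:.1f}, observed {obs_exceed}, "
      f"P(X >= observed) = {p_at_least:.3f}")
```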
International Nuclear Information System (INIS)
Berge-Thierry, C.
2007-05-01
The defence to obtain the 'Habilitation à Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk facilities), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults); to characterize their activity; and to evaluate the seismic motion to which the structure has to resist (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of the IRSN allowed me to work quickly on the different tasks of seismic hazard assessment. Thanks to expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motions for designing or verifying the stability of structures. (author)
Iverson, Richard M.; LeVeque, Randall J.
2009-01-01
A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.
On-line validation of linear process models using generalized likelihood ratios
International Nuclear Information System (INIS)
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
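A minimal sketch of a generalized likelihood ratio statistic for a mean shift in model residuals, a stand-in for the report's full monitoring scheme on the seventh-order steam generator model; the unit-variance Gaussian innovations are an assumption:

```python
import numpy as np

def glr_mean_shift(residuals, sigma):
    """GLR statistic for H1: residual mean != 0 vs H0: mean == 0."""
    r = np.asarray(residuals)
    k = len(r)
    theta_hat = r.mean()                       # MLE of the shift under H1
    return k * theta_hat**2 / (2 * sigma**2)   # maximized log-likelihood ratio

rng = np.random.default_rng(3)
clean = rng.normal(0.0, 1.0, 200)   # linear model still valid
drift = rng.normal(0.8, 1.0, 200)   # operating point has moved away
print(glr_mean_shift(clean, 1.0), glr_mean_shift(drift, 1.0))
# A new linear model would be generated when the statistic crosses a
# threshold chosen for the desired false-alarm rate.
```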
Mortality, fertility, and the OY ratio in a model hunter-gatherer system.
White, Andrew A
2014-06-01
An agent-based model (ABM) is used to explore how the ratio of old to young adults (the OY ratio) in a sample of dead individuals is related to aspects of mortality, fertility, and longevity experienced by the living population from which the sample was drawn. The ABM features representations of rules, behaviors, and constraints that affect person- and household-level decisions about marriage, reproduction, and infant mortality in hunter-gatherer systems. The demographic characteristics of the larger model system emerge through human-level interactions playing out in the context of "global" parameters that can be adjusted to produce a range of mortality and fertility conditions. Model data show a relationship between the OY ratios of living populations (the living OY ratio) and assemblages of dead individuals drawn from those populations (the dead OY ratio) that is consistent with that from empirically known ethnographic hunter-gatherer cases. The dead OY ratio is clearly related to the mean ages, mean adult mortality rates, and mean total fertility rates experienced by living populations in the model. Sample size exerts a strong effect on the accuracy with which the calculated dead OY ratio reflects the actual dead OY ratio of the complete assemblage. These results demonstrate that the dead OY ratio is a potentially useful metric for paleodemographic analysis of changes in mortality and mean age, and suggest that, in general, hunter-gatherer populations with higher mortality, higher fertility, and lower mean ages are characterized by lower dead OY ratios.
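A back-of-the-envelope companion to the ABM result (not the paper's model): drawing ages at death from an assumed exponential adult mortality schedule shows how strongly sample size affects the estimated dead OY ratio. The age-30 threshold for "old" and the mortality schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def dead_oy_ratio(n, adult_rate=0.03, age_adult=15.0, age_old=30.0):
    """Old/young ratio among n simulated adult deaths (toy schedule)."""
    ages = age_adult + rng.exponential(1.0 / adult_rate, n)  # ages at death
    old = np.sum(ages >= age_old)
    young = np.sum(ages < age_old)
    return old / max(young, 1)   # guard against tiny samples with no young

for n in (25, 100, 1000, 10000):
    print(n, round(dead_oy_ratio(n), 2))  # small samples scatter widely
```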
Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling
Directory of Open Access Journals (Sweden)
G. Delmonaco
2003-01-01
Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10,000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard analysis are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions in the simulation related to a 75-year return-time rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
Nicolae Lerma, Alexandre; Bulteau, Thomas; Elineau, Sylvain; Paris, François; Durand, Paul; Anselme, Brice; Pedreros, Rodrigo
2018-01-01
A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution, with explicit buildings, urban structures such as sea-front walls, and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on analysis of available hard and soft data and on conversion of qualitative to quantitative information to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagating inland and (3) water levels in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential evolution of the hazard. Our simulations show that, for a maximising 100-year hazard scenario, 38% of the affected zones in the municipality as a whole are prone to overflow flooding and 62% to flooding by propagation of overtopping water volumes along the seafront. Results also reveal that for the two kinds of statistical scenarios a difference of about 5% in the forcing conditions (water level, wave height and period) can produce significant differences in the flooding, such as +13.5% in the water volume propagating inland or +11.3% in the affected surface area. In some areas, the flood response appears to be very sensitive to the chosen scenario, with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty of the results. Considering a 100-year scenario with mean sea level rise (0.6 m), hazard
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
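The final combination step, turning per-rupture-variation intensities and rupture probabilities into a hazard curve, can be sketched schematically (the three-rupture inventory and all numbers below are invented; real CyberShake runs involve about 415,000 variations per site):

```python
import numpy as np

ruptures = [  # (annual probability, peak intensities of its variations, g)
    (0.010, np.array([0.08, 0.12, 0.15, 0.22])),
    (0.002, np.array([0.25, 0.31, 0.40, 0.55])),
    (0.020, np.array([0.03, 0.05, 0.06, 0.09])),
]

def annual_exceedance(x):
    """Annual rate of shaking exceeding level x (variations equally likely)."""
    rate = 0.0
    for p_rup, ims in ruptures:
        rate += p_rup * np.mean(ims > x)  # fraction of variations exceeding x
    return rate

levels = [0.05, 0.1, 0.2, 0.4]
print({x: round(annual_exceedance(x), 5) for x in levels})  # hazard curve
```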
A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.
Hardy, Lorna; Hogarth, Lee
2017-12-01
This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Inventory) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two-alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as at baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline (p …). Choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers.
Stochastic modeling of a hazard detection and avoidance maneuver—The planetary landing case
International Nuclear Information System (INIS)
Witte, Lars
2013-01-01
Hazard Detection and Avoidance (HDA) functionality, i.e. the ability to recognize and avoid potentially hazardous terrain features, is regarded as an enabling technology for upcoming robotic planetary landing missions. In the run-up to any landing mission, the landing site safety assessment is an important task in the systems and mission engineering process. To contribute to this task, this paper presents a mathematical framework to consider the HDA strategy and system constraints in this mission engineering aspect. The HDA maneuver is therefore modeled as a stochastic decision process based on Markov chains that maps an initial dispersion at an arrival gate to a new dispersion pattern affected by the divert decision-making and system constraints. The implications for an efficient numerical implementation are addressed. An example case study is given to demonstrate the implementation and use of the proposed scheme.
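A hedged sketch of the framework's core idea: a Markov transition matrix maps the arrival-gate dispersion over terrain cells to a post-divert landing dispersion. The four-cell terrain and the transition probabilities below are invented for illustration.

```python
import numpy as np

p_arrival = np.array([0.4, 0.3, 0.2, 0.1])   # dispersion over cells at gate
hazardous = np.array([False, True, False, True])

# T[i, j] = P(land in cell j | arrive over cell i): from a hazardous cell the
# lander diverts to reachable safe cells, within fuel and geometry limits.
T = np.array([
    [0.9, 0.0, 0.1, 0.0],   # safe cell: mostly stay
    [0.5, 0.1, 0.4, 0.0],   # hazardous: divert left/right, may fail (0.1)
    [0.1, 0.0, 0.9, 0.0],
    [0.0, 0.0, 0.6, 0.4],   # constrained divert: residual risk of 0.4
])

p_landing = p_arrival @ T                    # new dispersion after HDA
p_unsafe = p_landing[hazardous].sum()
print(p_landing.round(3), "P(land in hazard) =", round(p_unsafe, 3))
```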
International Nuclear Information System (INIS)
Mendez, W.M. Jr.
1990-01-01
Remediation of hazardous and mixed waste sites is often driven by assessments of the human health risks posed by exposures to hazardous substances released from these sites. The methods used to assess potential health risk involve, either implicitly or explicitly, models for pollutant releases, transport, human exposure and intake, and for characterizing health effects. Because knowledge about pollutant fate and transport processes at most waste sites is quite limited, and data costs are quite high, most of the models currently used to assess risk, and endorsed by regulatory agencies, are quite simple. The models employ many simplifying assumptions about pollutant fate and distribution in the environment, about human pollutant intake, and about toxicologic responses to pollutant exposures. An important consequence of data scarcity and model simplification is that risk estimates are quite uncertain, and estimating the magnitude of the uncertainty associated with a risk assessment has been very difficult. A number of methods have been developed to address the issue of uncertainty in risk assessments in a manner that realistically reflects uncertainty in model specification and data limitations. These methods include definition of multiple exposure scenarios, sensitivity analyses, and explicit probabilistic modeling of uncertainty. Recent developments in this area will be discussed, along with their possible impacts on remediation programs, and remaining obstacles to their wider use and acceptance by the scientific and regulatory communities.
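Explicit probabilistic modeling of uncertainty of the kind mentioned here is commonly done by Monte Carlo propagation of input distributions through the exposure model. The sketch below shows the idea for a simple drinking-water pathway; all distributions and parameter values are invented placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative lognormal uncertainty on concentration and intake (assumed shapes).
conc   = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)   # mg/L in groundwater
intake = rng.lognormal(mean=np.log(2.0),  sigma=0.3, size=n)   # L/day drinking water
bw     = rng.normal(70.0, 10.0, size=n).clip(40, 120)          # kg body weight
slope  = 0.5                                                   # (mg/kg-day)^-1 slope factor

risk = conc * intake / bw * slope          # simplistic linear dose-response
print(np.percentile(risk, [5, 50, 95]))    # risk distribution instead of a point value
```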
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
*Introduction by Russell W. Graymer
*Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson
*Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk
*Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
*Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer
*Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike
The plates consist of:
*Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
*Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
Directory of Open Access Journals (Sweden)
Vahdettin Demir
2016-01-01
In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Center's River Analysis System (HEC-RAS). In this river basin, human life losses and a significant amount of property damage were experienced in the 2012 flood. The preparation of the flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows of different return periods using a hydraulic model (HEC-RAS), and (3) preparation of flood risk maps by integrating the results of (1) and (2).
International Nuclear Information System (INIS)
Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.
1994-07-01
This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs.
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; isotopic compositions may thus be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slopes as estimates of the isotope ratios. The finite mixture models are parameterised by:
• the number of different ratios;
• the number of points belonging to each ratio-group;
• the ratios (i.e. slopes) of each group.
Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control parameters.
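To make the EM idea concrete, the sketch below fits a mixture of regression lines through the origin and interprets the fitted slopes as isotope ratios. It is a minimal illustration on synthetic data with a fixed number of components, not the authors' implementation (which additionally drops undersized groups to select the number of ratios).

```python
import numpy as np

def em_ratio_mixture(x, y, k=2, iters=200, seed=0):
    """EM for a mixture of k regression lines through the origin, y ~ slope_j * x."""
    rng = np.random.default_rng(seed)
    slopes = rng.uniform(y.min() / x.max(), y.max() / x.min(), size=k)
    weights, sigma = np.full(k, 1.0 / k), y.std() + 1e-9
    for _ in range(iters):
        # E-step: responsibility of each line for each point
        resid = y[:, None] - x[:, None] * slopes[None, :]
        dens = weights * np.exp(-0.5 * (resid / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope per component
        slopes = (resp * x[:, None] * y[:, None]).sum(0) / (resp * x[:, None] ** 2).sum(0)
        weights = resp.mean(axis=0)
        sigma = np.sqrt((resp * resid ** 2).sum() / len(x)) + 1e-9
    return slopes, weights

# synthetic two-ratio sample (ratios 0.0072 and 0.20 are arbitrary placeholders)
x = np.random.default_rng(1).uniform(1, 10, 300)
true = np.where(np.arange(300) < 150, 0.0072, 0.20)
y = true * x + np.random.default_rng(2).normal(0, 0.05, 300)
print(em_ratio_mixture(x, y, k=2))
```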
Price/Earnings Ratio Model through Dividend Yield and Required Yield Above Expected Inflation
Directory of Open Access Journals (Sweden)
Emil Mihalina
2010-07-01
The price/earnings ratio is the most popular and most widespread valuation model used to assess relative capital asset value on financial markets. In functional terms, company earnings over the very long term can be described with high significance. Long-term statistics show empirically that the yield demanded (required) on capital markets has a certain regularity: investors first require a yield above the stable inflation rate, and then a dividend yield and a capital increase caused by the growth of earnings that influences the price, under the assumption that the P/E ratio is stable. By combining the Gordon model for current dividend value with the model of market capitalization of earnings (the price/earnings ratio), and bearing in mind the influence of the general price level on company earnings, it is possible to adjust the price/earnings ratio by deriving a function of the required yield on capital markets, measured by a market index, through dividend yield and the inflation rate above stable inflation increased by profit growth. The S&P 500 index, for example, has in the last 100 years grown by exactly the inflation rate above the stable inflation rate increased by profit growth. The comparison of two series of price/earnings ratios, a modelled one and an average 7-year ratio, shows a notable correlation in the movement of the two series of variables, with a three-year deviation. Therefore, it could be hypothesized that three years of the expected inflation level, dividend yield and profit growth rate of the market index are discounted in the current market prices. The conclusion is that, at the present time, the relationship between the adjusted average price/earnings ratio and its effect on the market index on the one hand, and the modelled price/earnings ratio on the other, can clearly show the expected dynamics and course in the following period.
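A minimal Gordon-growth illustration of a modelled P/E ratio is sketched below; it shows the arithmetic of combining a required yield (stable inflation plus a demanded premium) with dividend payout and profit growth. All numbers are assumptions for illustration, not values from the paper.

```python
# Gordon model: P = D1 / (r - g), so P/E = payout * (1 + g) / (r - g).
payout_ratio = 0.5       # fraction of earnings paid as dividends
growth = 0.03            # long-run profit growth
stable_inflation = 0.02
required_premium = 0.04  # demanded yield above stable inflation
required_yield = stable_inflation + required_premium

pe_modelled = payout_ratio * (1 + growth) / (required_yield - growth)
print(f"modelled P/E: {pe_modelled:.1f}")   # ~17.2 with these inputs
```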
Veran, Sophie; Beissinger, Steven R
2009-02-01
Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, a skewed sex ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits on sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life-stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
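The stable stage distribution that these perturbation metrics are applied to is the dominant right eigenvector of the projection matrix. The toy two-sex, two-stage model below shows the computation; all vital rates are invented for illustration and do not correspond to the parrotlet populations.

```python
import numpy as np

# Stages: [female juvenile, female adult, male juvenile, male adult].
# Female-dominant model: both juvenile classes are produced by adult females.
A = np.array([[0.0, 0.9, 0.0, 0.0],    # female juveniles per adult female
              [0.3, 0.8, 0.0, 0.0],    # female juvenile survival, adult survival
              [0.0, 0.9, 0.0, 0.0],    # male juveniles per adult female
              [0.0, 0.0, 0.5, 0.7]])   # male juvenile survival, adult survival

vals, vecs = np.linalg.eig(A)
lead = np.argmax(vals.real)
w = np.abs(vecs[:, lead].real)
w /= w.sum()                            # stable stage distribution
asr = w[1] / (w[1] + w[3])              # adult sex ratio (proportion female)
print(f"lambda={vals[lead].real:.3f}, stable stages={w.round(3)}, ASR={asr:.2f}")
```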
Use of agent-based modelling in emergency management under a range of flood hazards
Directory of Open Access Journals (Sweden)
Tagg Andrew
2016-01-01
The Life Safety Model (LSM) was developed some 15 years ago, originally for dam-break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model 'worlds' and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.
A transparent and data-driven global tectonic regionalization model for seismic hazard assessment
Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice
2018-05-01
A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgement; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications, as well as the subjective probabilities (e.g. degree of being active/degree of being cratonic) that indicate the degree to which a site belongs in a tectonic category.
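Fuzzy classification of this kind assigns each site graded memberships rather than a hard label. The sketch below computes two such memberships from trapezoidal membership functions; the input variables and all breakpoints are placeholders chosen for illustration, not the paper's calibrated scheme.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Invented example: degree of being "active" from crustal strain rate, and degree
# of being "cratonic" from lithospheric age.
strain_rate = 3.0e-9     # 1/yr
litho_age = 250.0        # Myr

mu_active = trapezoid(strain_rate, 1e-10, 1e-8, 1e-6, 1e-5)
mu_cratonic = trapezoid(litho_age, 500, 1500, 4000, 4600)
print(f"degree active={mu_active:.2f}, degree cratonic={mu_cratonic:.2f}")
```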
The 2018 and 2020 Updates of the U.S. National Seismic Hazard Models
Petersen, M. D.
2017-12-01
During 2018 the USGS will update the 2014 National Seismic Hazard Models by incorporating new seismicity models, ground motion models, site factors, and fault inputs, and by improving weights to ground motion models using empirical and other data. We will update the earthquake catalog for the U.S. and introduce new rate models. Additional fault data will be used to improve rate estimates on active faults. New ground motion models (GMMs) and site factors for Vs30 have been released by the Pacific Earthquake Engineering Research Center (PEER), and we will consider these in assessing ground motions in craton and extended-margin regions of the central and eastern U.S. The USGS will also include basin-depth terms for selected urban areas of the western United States to improve long-period shaking assessments, using published depth estimates to the 1.0 and 2.5 km/s shear wave velocities. We will produce hazard maps for input into the building codes that span a broad range of periods (0.1 to 5 s) and site classes (shear wave velocity from 2000 m/s to 200 m/s in the upper 30 m of the crust, Vs30). In the 2020 update we plan to include: a new national crustal model that defines the basin depths required in the latest GMMs, new 3-D ground motion simulations for several urban areas, new magnitude-area equations, and new fault geodetic and geologic strain rate models. The USGS will also consider new 3-D ground motion simulations for these long-period maps. These new models are being evaluated and will be discussed at one or more regional and topical workshops held at the beginning of 2018.
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model is used along with GIS mapping techniques to determine desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters were originally defined for the Mediterranean region, the next step included the addition of other indicators such as groundwater and wind erosion. All of the layers, weighted by the environmental conditions present in the area, were then combined (following the same MEDALUS framework) before a desertification map was prepared. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters into the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, low land-quality management, vegetation degradation and the salinization of soil and water resources.
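MEDALUS-type indices are conventionally combined as the geometric mean of the quality-index layers, which is presumably how the extended six-layer version here is assembled as well. The sketch below shows that combination; the tiny rasters and their values are invented placeholders.

```python
import numpy as np

# Quality-index layers on a 2x2 toy raster (1 = best, ~2 = worst, as in MEDALUS).
layers = {
    "climate":      np.array([[1.2, 1.5], [1.8, 1.4]]),
    "soil":         np.array([[1.1, 1.6], [1.7, 1.3]]),
    "vegetation":   np.array([[1.4, 1.5], [1.9, 1.2]]),
    "land_use":     np.array([[1.3, 1.4], [1.6, 1.5]]),
    "groundwater":  np.array([[1.2, 1.7], [1.8, 1.1]]),   # added indicator
    "wind_erosion": np.array([[1.5, 1.6], [1.9, 1.4]]),   # added indicator
}

stack = np.stack(list(layers.values()))
hazard_index = stack.prod(axis=0) ** (1.0 / len(layers))   # geometric mean per cell
print(hazard_index.round(2))
```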
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools between practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software packages are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, leading possibly to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael
2015-04-01
Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform a thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of the resulting tsunami waves, and iv) mapping of the intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effect on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so-called «intensity maps» for flooding that provide a link to the established hazard-mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario; however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of
Aspect Ratio Model for Radiation-Tolerant Dummy Gate-Assisted n-MOSFET Layout.
Lee, Min Su; Lee, Hee Chul
2014-01-01
In order to acquire radiation-tolerant characteristics in integrated circuits, a dummy gate-assisted n-type metal oxide semiconductor field effect transistor (DGA n-MOSFET) layout was adopted. The DGA n-MOSFET has a different channel shape compared with the standard n-MOSFET. The standard n-MOSFET has a rectangular channel shape, whereas the DGA n-MOSFET has an extended rectangular shape at the edge of the source and drain, which affects its aspect ratio. In order to increase its practical use, a new aspect ratio model is proposed for the DGA n-MOSFET and this model is evaluated through three-dimensional simulations and measurements of the fabricated devices. The proposed aspect ratio model for the DGA n-MOSFET exhibits good agreement with the simulation and measurement results.
Estimation of direct effects for survival data by using the Aalen additive hazards model
DEFF Research Database (Denmark)
Martinussen, Torben; Vansteelandt, Stijn; Gerster, Mette
2011-01-01
We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned. The first stage involves fitting Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e. a modification of the observed counting process based on the first-stage estimates).
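For readers unfamiliar with the Aalen additive hazards model that both stages build on, a plain (single-stage) fit can be obtained with the lifelines package, as sketched below on its bundled example data. This is a generic Aalen fit, not the two-stage direct-effect estimator proposed in the paper.

```python
from lifelines import AalenAdditiveFitter
from lifelines.datasets import load_regression_dataset

df = load_regression_dataset()        # columns: var1..var3, T (time), E (event)
aaf = AalenAdditiveFitter(fit_intercept=True)
aaf.fit(df, duration_col="T", event_col="E")

# Time-varying cumulative regression coefficients B(t): the additive-hazards
# analogue of hazard ratios, one column per covariate plus the baseline.
print(aaf.cumulative_hazards_.head())
```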
Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report
International Nuclear Information System (INIS)
Holcomb, C.; Louie, B.; Mullins, M.E.; Outcalt, S.L.; Rogers, T.N.; Watts, L.
1998-01-01
Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is wastewater. In addition, the majority of the waste inventory at DOE sites previously used for nuclear weapons production is aqueous waste, and large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is paramount; this knowledge will lead to large savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are: (1) to develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes; and (2) to accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure-component, binary, ternary, and quaternary experimental data required for model development. As of May 1998, nine months into the first year of a three-year project, the authors have made significant progress in the database development, have begun testing the models, and have been performance-testing the apparatus on the pure components.
Tsunami hazard prevention-based land use planning model using GIS techniques in Muang Krabi, Thailand
International Nuclear Information System (INIS)
Soomro, A.S.
2012-01-01
The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and most fascinating provinces of southern Thailand, including its various regions, e.g. Phangna and Phuket, devastating human lives, coastal communications and economic activity. This research study aimed to generate a tsunami hazard prevention-based land use planning model using GIS (Geographical Information Systems) based on the hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the established mathematical techniques. The model has been classified according to the land suitability classification. The various techniques of GIS, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, such as high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for the decision makers to redevelop the region. (author)
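Criterion weighting with the Saaty scale is usually done by building a pairwise comparison matrix and taking its principal eigenvector, as in the Analytic Hierarchy Process. The sketch below shows that computation for three of the triggering factors; the pairwise judgements are invented for illustration, not the study's actual matrix.

```python
import numpy as np

# Pairwise comparison on the Saaty 1-9 scale:
# elevation vs. shoreline proximity vs. population density (invented judgements).
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                  # criterion weights (principal eigenvector)

ci = (vals[k].real - len(A)) / (len(A) - 1)   # consistency index
print(w.round(3), f"CI={ci:.3f}")             # CI near 0 => judgements consistent
```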
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
Spent Fuel Ratio Estimates from Numerical Models in ALE3D
Energy Technology Data Exchange (ETDEWEB)
Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-08-02
Potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands of conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to particle distribution, it is necessary to scale the DUO2 results to get a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released under identical damage conditions between a spent fuel and a surrogate material like DUO2. A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.
CalTOX, a multimedia total exposure model for hazardous-waste sites
International Nuclear Information System (INIS)
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous-substance release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
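The dose step described in the last sentence reduces to a sum of concentration-times-intake products over pathways. The sketch below shows that arithmetic for three of the pathways; the pathway names, concentrations, and intake factors are illustrative placeholders, not CalTOX's internal values.

```python
# Average daily potential dose as exposure concentration x intake factor, per pathway.
pathways = {
    # pathway: (concentration in contact medium, intake factor per kg body weight)
    "tap_water":   (0.004, 2.0 / 70),    # mg/L  x (L/day / kg)
    "indoor_air":  (0.0008, 15.0 / 70),  # mg/m3 x (m3/day / kg)
    "soil_ingest": (1.2e-5, 0.1 / 70),   # mg/kg x (kg/day / kg)
}
add = sum(conc * intake for conc, intake in pathways.values())  # mg/kg-day
print(f"average daily dose: {add:.2e} mg/kg-day")
```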
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open-access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open-access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
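Fit between a modelled and an observed flood extent is commonly scored with contingency metrics on binary wet/dry rasters. The sketch below computes two standard ones (the critical success index and the hit rate) on synthetic rasters; the abstract does not state which metrics the authors used, so these are generic choices.

```python
import numpy as np

def fit_scores(modelled, observed):
    """Contingency scores for binary flood-extent rasters (True = wet)."""
    hits   = np.sum(modelled & observed)
    misses = np.sum(~modelled & observed)
    false  = np.sum(modelled & ~observed)
    csi = hits / (hits + misses + false)      # critical success index
    hit_rate = hits / (hits + misses)
    return csi, hit_rate

rng = np.random.default_rng(0)
obs = rng.random((100, 100)) > 0.7            # placeholder "observed" extent
mod = obs ^ (rng.random((100, 100)) > 0.9)    # model disagrees on ~10% of cells
print(fit_scores(mod, obs))
```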
Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine
Directory of Open Access Journals (Sweden)
Hang-cheong Wong
2012-01-01
Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controllers in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.
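The MPC idea, stripped to its bones, is: at each step, use the fitted model to pick the control input that minimizes the predicted deviation from the lambda target, apply it, and repeat. The one-step receding-horizon sketch below uses a toy linear lambda model in place of the paper's RVM; all coefficients and constraints are invented for illustration.

```python
import numpy as np

# Toy identified model: lambda_{k+1} = a*lambda_k + b*u_k, u = injection correction.
a, b = 0.85, -0.12
target, lam = 1.0, 1.08          # stoichiometric target, current air ratio

for k in range(5):
    u = (target - a * lam) / b            # minimizes (lambda_{k+1} - target)^2 exactly
    u = np.clip(u, -1.0, 1.0)             # actuator constraint
    lam = a * lam + b * u + np.random.default_rng(k).normal(0, 0.005)  # plant + noise
    print(f"step {k}: u={u:+.3f}, lambda={lam:.3f}")
```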
Geospatial modeling of plant stable isotope ratios - the development of isoscapes
West, J. B.; Ehleringer, J. R.; Hurley, J. M.; Cerling, T. E.
2007-12-01
Large-scale spatial variation in stable isotope ratios can yield critical insights into the spatio-temporal dynamics of biogeochemical cycles, animal movements, and shifts in climate, as well as anthropogenic activities such as commerce, resource utilization, and forensic investigation. Interpreting these signals requires that we understand and model the variation. We report progress in our development of plant stable isotope ratio landscapes (isoscapes). Our approach utilizes a GIS, gridded datasets, a range of modeling approaches, and spatially distributed observations. We synthesize findings from four studies to illustrate the general utility of the approach and its ability to represent observed spatio-temporal variability in plant stable isotope ratios, and also outline some specific areas of uncertainty. We also address two basic but critical questions central to our ability to model plant stable isotope ratios using this approach: 1. Do the continuous precipitation isotope ratio grids represent reasonable proxies for plant source water? 2. Do continuous climate grids (as-is or modified) represent a reasonable proxy for the climate experienced by plants? Plant components modeled include leaf water, grape water (extracted from wine), bulk leaf material (Cannabis sativa; marijuana), and seed oil (Ricinus communis; castor bean). Our approaches to modeling the isotope ratios of these components varied from highly sophisticated process models to simple one-step fractionation models to regression approaches. The leaf water isoscapes were produced using steady-state models of enrichment and continuous grids of annual average precipitation isotope ratios and climate. These were compared to other modeling efforts, as well as to a relatively sparse but geographically distributed dataset from the literature. The latitudinal distributions and global averages compared favorably to other modeling efforts, and the observational data compared well to model predictions.
A "mental models" approach to the communication of subsurface hydrology and hazards
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
Household hazardous waste disposal to landfill: Using LandSim to model leachate migration
International Nuclear Information System (INIS)
Slack, Rebecca J.; Gronow, Jan R.; Hall, David H.; Voulvoulis, Nikolaos
2007-01-01
Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW. - Aquatic pollutants linked to the disposal of household hazardous waste in municipal landfills have the potential to exist in soil and groundwater for many years
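For intuition about why some determinands only become a concern over thousands of years, a back-of-envelope advection calculation with solute-specific retardation factors is sketched below. This is a generic textbook calculation with invented values, not LandSim's algorithm.

```python
# Travel time through the unsaturated zone: solutes move at the seepage velocity
# divided by their retardation factor R (sorbing metals have large R).
darcy_flux = 0.15          # m/yr infiltration (assumed)
porosity = 0.3
thickness = 10.0           # m of unsaturated zone (assumed)
retardation = {"chloride": 1.0, "cadmium": 50.0, "arsenic": 120.0}  # illustrative R

seepage_velocity = darcy_flux / porosity            # m/yr
for solute, R in retardation.items():
    t = thickness * R / seepage_velocity            # years to reach the aquifer
    print(f"{solute:9s}: ~{t:,.0f} yr")
```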
Carreau, J.; Naveau, P.; Neppel, L.
2017-05-01
The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
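The basic peaks-over-threshold building block of this model, a generalized Pareto fit to threshold excesses, can be reproduced with scipy. The sketch below uses synthetic daily rainfall and a simple quantile threshold; it illustrates the single-site GP step only, not the authors' conditional-mixture or probability-weighted-moments machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.4, scale=8.0, size=20000)      # stand-in daily rainfall (mm)

u = np.quantile(daily, 0.98)                             # threshold choice
exceed = daily[daily > u] - u
shape, loc, scale = stats.genpareto.fit(exceed, floc=0)  # GP fit to excesses
print(f"threshold={u:.1f} mm, shape={shape:.3f}, scale={scale:.2f}")

# Return level: the excess exceeded once per 100 threshold exceedances.
p = 1.0 / 100
level = u + stats.genpareto.ppf(1 - p, shape, loc=0, scale=scale)
print(f"return level ~ {level:.1f} mm")
```

The shape parameter printed here is the quantity that the paper allows to vary between subregions as a site-hazard indicator.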
Clinical trials: odds ratios and multiple regression models--why and how to assess them
Sobh, Mohamad; Cleophas, Ton J.; Hadj-Chaib, Amel; Zwinderman, Aeilko H.
2008-01-01
Odds ratios (ORs), unlike chi-square tests, provide direct insight into the strength of the relationship between treatment modalities and treatment effects. Multiple regression models can reduce the data spread due to certain patient characteristics and thus improve the precision of the treatment comparison.
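Both computations are short enough to show directly: the OR from a 2x2 table with its Woolf confidence interval, and the same OR recovered from a logistic regression, which is the multiple-regression route that can additionally adjust for patient characteristics. The trial counts are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Invented 2x2 trial: rows = treatment/control, columns = response yes/no.
table = np.array([[40, 60],
                  [25, 75]])
orr = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
se = np.sqrt((1.0 / table).sum())                    # SE of log OR (Woolf)
ci = np.exp(np.log(orr) + np.array([-1.96, 1.96]) * se)
print(f"OR={orr:.2f}, 95% CI {ci.round(2)}")

# Same OR via logistic regression (exp of the treatment coefficient).
y = np.repeat([1, 0, 1, 0], [40, 60, 25, 75])
x = sm.add_constant(np.repeat([1, 1, 0, 0], [40, 60, 25, 75]))
fit = sm.Logit(y, x).fit(disp=0)
print(f"regression OR={np.exp(fit.params[1]):.2f}")
```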
Directory of Open Access Journals (Sweden)
Pieter-Jan Vlok
2012-01-01
ENGLISH ABSTRACT: Increased competitiveness in the production world necessitates improved maintenance strategies to increase availability and drive down costs. The maintenance engineer is thus faced with the need to make more intelligent preventive renewal decisions. Two of the main techniques for achieving this are Condition Monitoring (such as vibration monitoring and oil analysis) and Statistical Failure Analysis (typically using probabilistic techniques). The present paper discusses these techniques, their uses and weaknesses, and then presents the Proportional Hazards Model as a solution to most of these weaknesses. It then goes on to compare the results of the different techniques in monetary terms, using a South African case study. This comparison shows clearly that the Proportional Hazards Model is superior to the present techniques and should be the preferred model for many actual maintenance situations.
AFRIKAANSE OPSOMMING: Increased levels of competition in the production environment necessitate improved maintenance strategies to raise equipment availability and minimise costs. Maintenance engineers consequently have to make more intelligent preventive renewal decisions. Two prominent techniques for reaching this goal are Condition Monitoring (such as vibration monitoring or oil analysis) and Statistical Failure Analysis (usually by means of probabilistic methods). This article considers both of these techniques, their uses and shortcomings, and then proposes the Proportional Hazards Model as a solution to most of the shortcomings. The article also compares the different techniques in monetary terms by making use of a South African case study. This comparison clearly shows that the Proportional Hazards Model holds greater promise than the current techniques and that it should be the preferred solution in many real maintenance situations.
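A basic proportional hazards fit of the kind advocated here can be run with the lifelines package, as sketched below on its bundled example data. This is a generic Cox fit for illustration; the paper's model additionally incorporates condition-monitoring covariates from the South African case study, which are not reproduced here.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                     # example survival data shipped with lifelines
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                   # hazard ratios per covariate

# Median predicted lifetimes can feed a preventive-renewal decision directly.
print(cph.predict_median(df.iloc[:3]))
```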
Socio-economic vulnerability to natural hazards - proposal for an indicator-based model
Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.
2012-04-01
Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, and indirect losses, i.e. long-term effects of the event. Direct impacts of a landslide typically include casualties and damage to buildings and infrastructure, while indirect losses may include, for example, business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economic resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually
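Indicator-based indices of this kind are typically built by normalising each indicator and taking a weighted sum. The sketch below shows that pattern for three invented indicators across three communities; the indicator set, weights, and values are placeholders, not SafeLand's actual scheme.

```python
import numpy as np

raw = {
    "pct_elderly":        np.array([12.0, 25.0, 8.0]),   # three communities
    "unemployment_rate":  np.array([4.0, 11.0, 6.0]),
    "emergency_services": np.array([0.9, 0.4, 0.7]),     # higher = better prepared
}
weights = {"pct_elderly": 0.4, "unemployment_rate": 0.4, "emergency_services": 0.2}

index = np.zeros(3)
for name, v in raw.items():
    norm = (v - v.min()) / (v.max() - v.min())           # min-max normalisation
    if name == "emergency_services":
        norm = 1.0 - norm        # invert: better services => lower vulnerability
    index += weights[name] * norm
print(index.round(2))            # higher = more socio-economically vulnerable
```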
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90 m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly available datasets. For inundation mapping, we use a 2D finite-volume shallow-water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30 m SRTM is now available worldwide, along with higher-accuracy and/or higher-resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free-surface elevation inside cells and for computing inter-cell fluxes. This approach almost achieves the computational speed typical of coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution of the available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
Model-free approach to the estimation of radiation hazards. I. Theory
International Nuclear Information System (INIS)
Zaider, M.; Brenner, D.J.
1986-01-01
The experience of the Japanese atomic bomb survivors constitutes, to date, the major database for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem: namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards.
FLOOD HAZARD MAP IN THE CITY OF BATNA (ALGERIA) BY HYDRAULIC MODELING APPROACH
Directory of Open Access Journals (Sweden)
Guellouh SAMI
2016-06-01
In the light of global climatic changes that appear to influence the frequency and intensity of floods, and whose damages are still growing, understanding hydrological processes, their spatiotemporal setting and their extremes has become a paramount concern of local communities in forecasting terms. The aim of this study is to map the flood hazard using a hydraulic modeling method. Using a Geographic Information System (GIS) allows a more detailed spatial analysis of the extent of flood risk, through the application of hydraulic modeling programs for different flood frequencies. Based on the results of this analysis, decision makers can implement a strategy of risk management related to the overflowing of rivers through the city of Batna.
Proportional hazards model with varying coefficients for length-biased data.
Zhang, Feipeng; Chen, Xuerong; Zhou, Yong
2014-01-01
Length-biased data arise in many important applications, including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss to follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Suzette Payne
2006-04-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and "soft" loose sediments tend to attenuate seismic energy more strongly than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
International Nuclear Information System (INIS)
Hopkins, Philip F.; Hernquist, Lars
2009-01-01
We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of quasar/active galactic nucleus (AGN) lifetimes and light curves. Given the observed (well constrained) AGN luminosity function, a particular model for AGN light curves L(t) or, equivalently, the distribution of AGN lifetimes (time above a given luminosity t(>L)) translates directly and uniquely (without further assumptions) to a predicted distribution of Eddington ratios at each BH mass. Models for self-regulated BH growth, in which feedback produces a self-regulating 'decay' or 'blowout' phase after the AGN reaches some peak luminosity/BH mass and begins to expel gas and shut down accretion, make specific predictions for the light curves/lifetimes, distinct from, e.g., the expected distribution if AGN simply shut down by gas starvation (without feedback) and very different from the prediction of simple phenomenological 'light bulb' scenarios. We show that the present observations of the Eddington ratio distribution, spanning nearly 5 orders of magnitude in Eddington ratio, 3 orders of magnitude in BH mass, and redshifts z = 0-1, agree well with the predictions of self-regulated models, and rule out phenomenological 'light bulb' or pure exponential models, as well as gas starvation models, at high significance (∼5σ). We also compare with observations of the distribution of Eddington ratios at a given AGN luminosity, and find similar good agreement (but show that these observations are much less constraining). We fit the functional form of the quasar lifetime distribution and provide these fits for use, and show how the Eddington ratio distributions place precise, tight limits on the AGN lifetimes at various luminosities, in agreement with model predictions. We compare with independent estimates of episodic lifetimes and use this to constrain the shape of the typical AGN light curve, and provide simple analytic fits to these for use in
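The central observable here, the Eddington ratio, is simple to compute from the standard definition of the Eddington luminosity, as sketched below (the example luminosity and black-hole mass are arbitrary illustrative values).

```python
# Eddington ratio lambda_Edd = L / L_Edd, with L_Edd ~ 1.26e38 erg/s per solar mass.
M_SUN_LEDD = 1.26e38   # erg/s per solar mass

def eddington_ratio(L_bol, M_bh):
    """L_bol in erg/s, M_bh in solar masses."""
    return L_bol / (M_SUN_LEDD * M_bh)

print(eddington_ratio(1e45, 1e8))   # ~0.08 for a 10^8 Msun BH at 10^45 erg/s
```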
MODELING AND FORECASTING THE GROSS ENROLLMENT RATIO IN ROMANIAN PRIMARY SCHOOL
Directory of Open Access Journals (Sweden)
MARINOIU CRISTIAN
2014-06-01
Full Text Available The gross enrollment ratio in primary school is one of the basic indicators used to evaluate the objectives of the educational system. Knowing its evolution allows a more rigorous substantiation of strategies and human-resources policies, not only in the educational field but also in the economic one. In this paper we propose an econometric model to describe the gross enrollment ratio in Romanian primary school and forecast it for the coming years, following the Box-Jenkins methodology. The results indicate a continuing decrease of this ratio over the coming years.
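As a hedged illustration of the Box-Jenkins workflow the abstract describes (the series values and the ARIMA order below are placeholders, not the paper's data or fitted model), a minimal sketch in Python:

```python
# Minimal Box-Jenkins (ARIMA) forecast of an annual enrollment-ratio series.
# Values are illustrative placeholders, not the Romanian data.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

years = pd.period_range("2000", "2013", freq="Y")
ratio = pd.Series([104.2, 103.8, 103.1, 102.5, 101.9, 101.2, 100.8,
                   100.1, 99.6, 99.0, 98.4, 97.9, 97.3, 96.8], index=years)

fit = ARIMA(ratio, order=(1, 1, 0)).fit()   # order chosen for illustration only
print(fit.forecast(steps=5))                # point forecasts for the next five years
```

In practice the order (p, d, q) would be selected from the autocorrelation structure of the series, as the Box-Jenkins methodology prescribes.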
Cosmic ray muon charge ratio derived from the new scaling variable model
Bhattacharya, D P
1980-01-01
The charge ratio of sea-level muons has been estimated from the new scaling variable model and the CERN Intersecting Storage Ring data of Capiluppi et al. (1974) for pp → π±X and pp → K±X inclusive reactions. The estimated muon charge ratio is found to be 1.21 and the result has been compared with the experimental data of Parker et al. (1969), Burnet et al. (1973), Ashley et al., and Muraki et al. (1979). (20 refs).
Duality-mediated critical amplitude ratios for the (2 + 1)-dimensional S = 1 XY model
Nishiyama, Yoshihiro
2017-09-01
The phase transition for the (2 + 1)-dimensional spin-S = 1 XY model was investigated numerically. Because of the boson-vortex duality, the spin stiffness ρs in the ordered phase and the vortex-condensate stiffness ρv in the disordered phase should have a close relationship. We employed the exact diagonalization method, which yields the excitation gap directly. As a result, we estimate the amplitude ratios ρs,v/Δ (Δ: Mott insulator gap) by means of scaling analyses for finite-size clusters with N ≤ 22 spins. The ratio ρs/ρv admits a quantitative measure of deviation from self-duality.
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-11-01
El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand, the characterization of the threat over the entire coast of El Salvador, and on the other, the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, given the extent of the coastal area, we computed maximum elevation maps and, from the elevation in the near shore, estimated the run-up and the flooded area using empirical relations. We considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that on the western Salvadorian coast run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results
Issues in testing the new national seismic hazard model for Italy
Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.
2016-12-01
It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismological, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus the strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences that show how the models have developed toward the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options, including deterministic scenario-based ones. Comparing what works
A Three End-Member Mixing Model Based on Isotopic Composition and Elemental Ratio
Directory of Open Access Journals (Sweden)
Kon-Kee Liu Shuh-Ji Kao
2007-01-01
Full Text Available A three end-member mixing model based on nitrogen isotopic composition and organic carbon to nitrogen ratio of suspended particulate matter in an aquatic environment has been developed. Mathematical expressions have been derived for the calculation of the fractions of nitrogen or organic carbon originating from three different sources of distinct isotopic and elemental compositions. The model was successfully applied to determine the contributions from anthropogenic wastes, soils and bedrock-derived sediments to particulate nitrogen and particulate organic carbon in the Danshuei River during the flood caused by Typhoon Bilis in August 2000. The model solutions have been expressed in a general form that allows applications to mixtures with other types of isotopic compositions and elemental ratios or in forms other than suspended particulate matter.
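As a hedged sketch of the linear algebra involved (our notation, not the paper's general derivation): with f_i the fraction of particulate nitrogen from source i, δ_i the source δ15N values, and r_i the source OC/N ratios, one consistent closure is

```latex
% Three end-member balance: three equations in the three nitrogen fractions.
f_1 + f_2 + f_3 = 1, \qquad
\sum_{i=1}^{3} f_i\,\delta_i = \delta_{\mathrm{mix}}, \qquad
\sum_{i=1}^{3} f_i\, r_i = r_{\mathrm{mix}}
```

Measured mixture values on the right-hand sides turn this into a 3 × 3 linear system for (f_1, f_2, f_3); the corresponding organic-carbon fractions then follow as f_i r_i / r_mix.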
A void ratio dependent water retention curve model including hydraulic hysteresis
Directory of Open Access Journals (Sweden)
Pasha Amin Y.
2016-01-01
Full Text Available Past experimental evidence has shown that the Water Retention Curve (WRC) evolves with mechanical stress and structural changes in the soil matrix. Models currently available in the literature for capturing the volume-change dependency of WRC are mainly empirical in nature, requiring an extensive experimental programme for parameter identification, which renders them unsuitable for practical applications. In this paper, an analytical model for the evaluation of the void ratio dependency of WRC in deformable porous media is presented. The proposed approach enables quantification of the dependency of WRC on void ratio solely based on the form of WRC at the reference void ratio and requires no additional parameters. The effect of hydraulic hysteresis on the evolution process is also incorporated in the model, an aspect rarely addressed in the literature. Expressions are presented for the evolution of main and scanning curves due to loading and change in the hydraulic path from scanning to main wetting/drying and vice versa, as well as for WRC parameters such as the air entry value, air expulsion value, pore size distribution index and slope of the scanning curve. The model is validated using experimental data on compacted and reconstituted soils subjected to various hydro-mechanical paths. Good agreement is obtained between model predictions and experimental data in all the cases considered.
Turchaninova, A.
2012-04-01
The estimation of extreme avalanche runout distances, flow velocities, impact pressures and volumes is an essential part of snow engineering in the mountain regions of Russia, underpinning avalanche hazard assessment and mapping. Russian guidelines accept the application of different avalanche models as well as approaches for the estimation of model input parameters. Consequently, different teams of engineers in Russia apply various dynamics and statistical models in engineering practice. This gives avalanche practitioners and experts more freedom, but it causes many uncertainties given the serious limitations of avalanche models. We discuss these problems by presenting the application results of different well-known and widely used statistical (developed in Russia) and avalanche dynamics models for several avalanche test sites in the Khibini Mountains (Kola Peninsula) and the Caucasus. The most accurate and well-documented data on powder and wet, large rare and small frequent snow avalanche events have been collected since the 1960s in the Khibini Mountains by the Avalanche Safety Center of "Apatit". These data were digitized and are available for use and analysis. A detailed digital avalanche database (GIS) was then created for the first time. It contains contours of observed avalanches (ESRI shapes, more than 50 years of observations), DEMs, remote sensing data, descriptions of snow pits, photos, etc. The Russian avalanche data are thus a unique source of information for understanding avalanche flow rheology and for the future development and calibration of avalanche dynamics models. The GIS database was used to analyze model input parameters and to calibrate and verify the avalanche models. Regarding extreme dynamic parameters, the outputs of different models can differ significantly. This is unacceptable for engineering purposes in the absence of well-defined guidelines in Russia. The frequency curves for the runout distance
An enhanced fire hazard assessment model and validation experiments for vertical cable trays
International Nuclear Information System (INIS)
Li, Lu; Huang, Xianjia; Bi, Kun; Liu, Xiaoshuang
2016-01-01
Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPP. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacings was tested. - Abstract: The model referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays) was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacings. Experiments on vertical cable tray fires with three typical cable spacings were conducted, and the histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, the enhanced model is shown to be capable of predicting the different behaviors of cable fires with different cable spacings by adjusting the flame spread speed only.
An enhanced fire hazard assessment model and validation experiments for vertical cable trays
Energy Technology Data Exchange (ETDEWEB)
Li, Lu [State Key Laboratory of Fire Science, University of Science and Technology of China, Hefei 230027 (China); Huang, Xianjia, E-mail: huangxianjia@gziit.ac.cn [Joint Laboratory of Fire Safety in Nuclear Power Plants, Institute of Industry Technology Guangzhou & Chinese Academy of Sciences, Guangzhou 511458 (China); Bi, Kun; Liu, Xiaoshuang [China Nuclear Power Design Co., Ltd., Shenzhen 518045 (China)
2016-05-15
Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPP. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacings was tested. - Abstract: The model referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays) was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacings. Experiments on vertical cable tray fires with three typical cable spacings were conducted, and the histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, the enhanced model is shown to be capable of predicting the different behaviors of cable fires with different cable spacings by adjusting the flame spread speed only.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare the statistical properties of mediation analyses incorporating PH and AFT approaches (employing the SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, the effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
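A hedged sketch of the PH-vs-AFT comparison (the paper uses the SAS procedures PHREG and LIFEREG; this analogue uses the Python lifelines package and its bundled recidivism dataset rather than the paper's simulated Weibull data):

```python
# Fit a semi-parametric PH model and a fully parametric Weibull AFT model to
# the same survival data; in a mediation analysis, treatment and mediator
# coefficients from such fits would be combined (e.g., product of coefficients).
from lifelines import CoxPHFitter, WeibullAFTFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # "week" = duration, "arrest" = event indicator (0 = censored)

cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
aft = WeibullAFTFitter().fit(df, duration_col="week", event_col="arrest")

print(cph.summary["coef"])  # log hazard ratios
print(aft.summary["coef"])  # AFT coefficients on the log time scale
```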
Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada
International Nuclear Information System (INIS)
Coppersmith, K.J.; Youngs, R.R.
1992-01-01
In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model that describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model that describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, we allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
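A hedged sketch of the tornado-diagram step (hypothetical parameter names and a toy loss function; the UCERF3-TD tree and the portfolio loss model are far larger):

```python
# Rank logic-tree parameters by the "swing" in portfolio loss when each
# parameter alone moves across its branches while the others stay at baseline.
def portfolio_loss(params):
    # Stand-in for an expensive hazard/loss evaluation.
    return params["slip_rate"] * 2.0 + params["mmax"] * 0.5 + params["gmpe"] * 0.1

baseline = {"slip_rate": 1.0, "mmax": 1.0, "gmpe": 1.0}
branches = {
    "slip_rate": [0.5, 1.0, 1.5],
    "mmax": [0.8, 1.0, 1.2],
    "gmpe": [0.9, 1.0, 1.1],
}

swings = {}
for name, values in branches.items():
    losses = [portfolio_loss({**baseline, name: v}) for v in values]
    swings[name] = max(losses) - min(losses)

# Parameters with small swings are candidates for fixing at a single branch,
# shrinking the tree; parameters with large swings must keep varying.
for name in sorted(swings, key=swings.get, reverse=True):
    print(f"{name}: swing = {swings[name]:.2f}")
```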
Constraints on the tensor-to-scalar ratio for non-power-law models
International Nuclear Information System (INIS)
Vázquez, J. Alberto; Bridges, M.; Ma, Yin-Zhe; Hobson, M.P.
2013-01-01
Recent cosmological observations hint at a deviation from the simple power-law form of the primordial spectrum of curvature perturbations. In this paper we show that in the presence of a tensor component, a turn-over in the initial spectrum is preferred by current observations, and hence non-power-law models ought to be considered. For instance, for a power-law parameterisation with both a tensor component and a running parameter, current data show a preference for negative running at more than 2.5σ C.L. As a consequence of this deviation from a power law, constraints on the tensor-to-scalar ratio r are slightly broader. We also present constraints on the inflationary parameters for a model-independent reconstruction and the Lasenby and Doran (LD) model. In particular, the constraint on the tensor-to-scalar ratio from the LD model is r_LD = 0.11 ± 0.024. In addition to current data, we show expected constraints from Planck-like and CMB-Pol sensitivity experiments by using Markov-Chain-Monte-Carlo sampling chains. For all the models, we have included the Bayesian evidence to perform a model selection analysis. The Bayes factor, using current observations, shows a strong preference for the LD model over the standard power-law parameterisation, and provides an insight into the accuracy of differentiating models through future surveys.
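For reference, the power-law parameterisation with running that the abstract refers to is conventionally written (standard notation, not the paper's own equations) as:

```latex
% Primordial scalar spectrum with running of the spectral index:
\mathcal{P}_{\mathcal{R}}(k) = A_s \left(\frac{k}{k_0}\right)^{\,n_s - 1 + \frac{1}{2}\alpha_s \ln(k/k_0)}
% with the tensor-to-scalar ratio defined at the pivot scale k_0 as r = A_t / A_s.
```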
A prediction model for wind speed ratios at pedestrian level with simplified urban canopies
Ikegaya, N.; Ikeda, Y.; Hagishima, A.; Razak, A. A.; Tanimoto, J.
2017-02-01
The purpose of this study is to review and improve prediction models for wind speed ratios at pedestrian level with simplified urban canopies. We adopted an extensive database of velocity fields under various conditions for arrays consisting of cubes, slender or flattened rectangles, and rectangles with varying roughness heights. Conclusions are summarized as follows: first, a new geometric parameter is introduced as a function of the plan area index and the aspect ratio so as to express the increase in virtual density that causes wind speed reduction. Second, the estimated wind speed ratios in the range 0.05 coefficients between the wind speeds averaged over the entire region, and the front or side region values are larger than 0.8. In contrast, in areas where the influence of roughness elements is significant, such as behind a building, the wind speeds are weakly correlated.
Chai, E W; H'ng, P S; Peng, S H; Wan-Azha, W M; Chin, K L; Chow, M J; Wong, W Z
2013-01-01
In Malaysia, large amounts of organic materials, which lead to disposal problems, are generated from agricultural residues, especially from palm oil industries. Increasing landfill costs and regulations, which limit the types of waste accepted at landfills, have increased interest in composting as a component of waste management. The objectives of this study were to characterize the compost feedstock properties of common organic waste materials available in Malaysia. Thus, ratio modelling of matching ingredients for empty fruit bunch (EFB) co-composting using different organic materials in Malaysia was carried out. Organic waste materials with a C/N ratio of composting. The outcome of this study suggested that the percentage of EFB ranged between 50% and 60%, which is considered the ideal mixing ratio in EFB co-composting. Conclusively, EFB can be utilized in composting if appropriate feedstock in terms of physical and chemical characteristics is coordinated in the co-composting process.
Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging
Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.
2008-03-01
We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
Modeling the bathtub shape hazard rate function in terms of reliability
International Nuclear Information System (INIS)
Wang, K.S.; Hsu, F.S.; Liu, P.P.
2002-01-01
In this paper, a general form of bathtub-shape hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e., it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, whereby the possibility of the system sustaining the normal operation load decreases with time; it depends on the failure probability, 1-R. This representation denotes the memory characteristics of the second failure cause. Man-machine interference may lead to a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate operator habits in system operations, etc. It is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between the mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually eliminated, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is thus found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used in the validation of the proposed model, with satisfactory results found from the comparisons.
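A hedged reading of the additive structure described above (an illustrative parameterisation consistent with the description; the paper's exact functional forms may differ):

```latex
% Additive hazard rate in terms of reliability R: a constant random-failure
% term, a cumulative-damage term in the failure probability (1 - R), a
% man-machine term coupling R and (1 - R), and an adaptation term as a
% power of reliability.
h(R) = \beta_0 + \beta_1 (1 - R) + \beta_2 R (1 - R) + \beta_3 R^{k}
```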
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular for analyzing the effects of several covariates on survival time. However, the assumption of constant hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, accelerated failure time (AFT) models do not assume constant hazards in the survival data, as the PH model does, and can be used as an alternative when the constant-hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best fitted model (the log-normal AFT model) showed that covariates such as the woman's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
Two-Part Models for Fractional Responses Defined as Ratios of Integers
Directory of Open Access Journals (Sweden)
Harald Oberhofer
2014-09-01
Full Text Available This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and thus allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
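A hedged sketch of the Binomial component of the first two-part model (statsmodels on simulated data; the paper's estimator and the zero-part equation are not reproduced here):

```python
# Binomial GLM with known group size: a natural model for a fractional
# response defined as a ratio of integers (successes out of group members).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
group = rng.integers(5, 50, size=n)             # known group sizes
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))     # true participation probability
y = rng.binomial(group, p)                      # participants per group

X = sm.add_constant(x)
endog = np.column_stack([y, group - y])         # (successes, failures) outcome
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print(fit.params)                               # approximately [-1.0, 0.8]
```

A full two-part analysis would pair this with a first-stage model (e.g., a logit) for whether the ratio is zero at all.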
Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.
2010-05-01
Lake outburst floods can evolve from complex process chains like avalanches of rock or ice that produce flood waves in a lake, which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams, which can cause sudden outflow of the accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway - though problematic - has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves overtopping and eventually weakening the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also
A hypothetical model for predicting the toxicity of high aspect ratio nanoparticles (HARN)
Tran, C. L.; Tantra, R.; Donaldson, K.; Stone, V.; Hankin, S. M.; Ross, B.; Aitken, R. J.; Jones, A. D.
2011-12-01
The ability to predict nanoparticle (dimensional structures which are less than 100 nm in size) toxicity through the use of a suitable model is an important goal if nanoparticles are to be regulated in terms of exposures and toxicological effects. Recently, a model to predict the toxicity of nanoparticles with high aspect ratio has been put forward by a consortium of scientists. The high aspect ratio nanoparticles (HARN) model is a platform that relates the physical dimensions of HARN (specifically the length to diameter ratio) and biopersistence to their toxicity in biological environments. Potentially, this model is of great public health and economic importance, as it can be used as a tool not only to predict toxicological activity but also to classify the toxicity of various fibrous nanoparticles, without the need to carry out time-consuming and expensive toxicology studies. However, this model of toxicity is currently hypothetical in nature and is based solely on drawing similarities in dimensional geometry with asbestos and synthetic vitreous fibres. The aim of this review is two-fold: (a) to present findings from past literature on the physicochemical property and pathogenicity bioassay testing of HARN; (b) to identify some of the challenges and future research steps crucial before the HARN model can be accepted as a predictive model. By presenting what has been done, we are able to identify scientific challenges and research directions that are needed for the HARN model to gain public acceptance. Our recommendations for future research include the need to: (a) accurately link physicochemical data with corresponding pathogenicity assay data, through the use of suitable reference standards and standardised protocols, (b) develop better tools/techniques for physicochemical characterisation, (c) develop better ways of monitoring HARN in the workplace, (d) reliably measure dose exposure levels, in order to support future epidemiological
A hypothetical model for predicting the toxicity of high aspect ratio nanoparticles (HARN)
International Nuclear Information System (INIS)
Tran, C. L.; Tantra, R.; Donaldson, K.; Stone, V.; Hankin, S. M.; Ross, B.; Aitken, R. J.; Jones, A. D.
2011-01-01
The ability to predict nanoparticle (dimensional structures which are less than 100 nm in size) toxicity through the use of a suitable model is an important goal if nanoparticles are to be regulated in terms of exposures and toxicological effects. Recently, a model to predict the toxicity of nanoparticles with high aspect ratio has been put forward by a consortium of scientists. The high aspect ratio nanoparticles (HARN) model is a platform that relates the physical dimensions of HARN (specifically the length to diameter ratio) and biopersistence to their toxicity in biological environments. Potentially, this model is of great public health and economic importance, as it can be used as a tool not only to predict toxicological activity but also to classify the toxicity of various fibrous nanoparticles, without the need to carry out time-consuming and expensive toxicology studies. However, this model of toxicity is currently hypothetical in nature and is based solely on drawing similarities in dimensional geometry with asbestos and synthetic vitreous fibres. The aim of this review is two-fold: (a) to present findings from past literature on the physicochemical property and pathogenicity bioassay testing of HARN; (b) to identify some of the challenges and future research steps crucial before the HARN model can be accepted as a predictive model. By presenting what has been done, we are able to identify scientific challenges and research directions that are needed for the HARN model to gain public acceptance. Our recommendations for future research include the need to: (a) accurately link physicochemical data with corresponding pathogenicity assay data, through the use of suitable reference standards and standardised protocols, (b) develop better tools/techniques for physicochemical characterisation, (c) develop better ways of monitoring HARN in the workplace, (d) reliably measure dose exposure levels, in order to support future epidemiological
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. Other than microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points: the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step; the critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm, a level high enough to control Clostridium botulinum without posing a chemical hazard to the consumer. The physical hazard from metal clips could be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham could be reduced in the fermentation process, for which the critical limit of pH was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.
A Reduced Model for Salt-Finger Convection in the Small Diffusivity Ratio Limit
Directory of Open Access Journals (Sweden)
Jin-Han Xie
2017-01-01
Full Text Available A simple model of nonlinear salt-finger convection in two dimensions is derived and studied. The model is valid in the limit of a small solute to heat diffusivity ratio and a large density ratio, which is relevant to both oceanographic and astrophysical applications. Two limits distinguished by the magnitude of the Schmidt number are found. For order one Schmidt numbers, appropriate for astrophysical applications, a modified Rayleigh–Bénard system with large-scale damping due to a stabilizing temperature is obtained. For large Schmidt numbers, appropriate for the oceanic setting, the model combines a prognostic equation for the solute field and a diagnostic equation for inertia-free momentum dynamics. Two distinct saturation regimes are identified for the second model: the weakly driven regime is characterized by a large-scale flow associated with a balance between advection and linear instability, while the strongly-driven regime produces multiscale structures, resulting in a balance between energy input through linear instability and energy transfer between scales. For both regimes, we analytically predict and numerically confirm the dependence of the kinetic energy and salinity fluxes on the ratio between solutal and thermal Rayleigh numbers. The spectra and probability density functions are also computed.
Directory of Open Access Journals (Sweden)
Durand Eduard
2016-01-01
Full Text Available Along the river Loire, in order to have a homogeneous method for carrying out specific risk assessment studies, a new model named CARDigues (Levee Breach Hazard Calculation) was developed in a partnership with DREAL Centre-Val de Loire (owner of levees), Cerema and Irstea. This model makes it possible to approach the probability of failure of every levee section and to integrate and cross different "stability" parameters such as topography and included structures, geology and geotechnical characteristics of materials, hydraulic loads, as well as observations from visual inspections or instrumentation results considered as disorders (seepage, burrowing animals, vegetation, pipes, etc.). This model and the integrated tool CARDigues make it possible to check, for each levee section, the probability of appearance and rupture for five breaching scenarios initiated by: overflowing, internal erosion, slope instability, external erosion and uplift. It has recently been updated and has been applied on several levee systems by different contractors. The article presents the CARDigues model principles and its recent developments (version V28.00), with examples on the river Loire, and how it is currently used for a relevant and global levee system diagnosis and assessment. Levee reinforcement or improvement management is also a perspective for applications of this model.
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using the WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method: it can overcome the SMR problem when no bladder cancer cases are observed in an area.
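A hedged sketch of the classical SMR computation the abstract starts from (illustrative counts, not the Libyan registry data):

```python
# SMR_i = observed_i / expected_i, with expected counts obtained by applying
# the overall incidence rate to each area's population (indirect method).
import numpy as np

observed = np.array([4, 0, 12, 7])                       # cases per area
population = np.array([50_000, 20_000, 120_000, 90_000])

overall_rate = observed.sum() / population.sum()
expected = overall_rate * population
smr = observed / expected          # relative risk estimate per area
print(smr)  # areas with zero observed cases get SMR = 0, the instability
            # that log-normal (or Bayesian) smoothing is meant to fix
```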
Parton distributions and EMC ratios of the 6Li nucleus in the constituent quark exchange model
Modarres, M.; Hadian, A.
2017-10-01
While the constituent quark model (CQM), in which the quarks are assumed to be complex objects, is used to calculate the parton distribution functions of the iso-scalar lithium-6 (6Li) nucleus, the u-d constituent quark distribution functions of the 6Li nucleus are evaluated from the valence quark exchange formalism (VQEF) for the A = 6 iso-scalar system. After computing the valence quark, sea quark, and gluon distribution functions in the constituent quark exchange model (CQEM, i.e., CQM + VQEF), the nucleus structure function is calculated for the 6Li nucleus at the leading order (LO) and next-to-leading-order (NLO) levels to extract the European muon collaboration (EMC) ratio at different hard scales, using the standard Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations. The outcomes are compared with those of our previous works and the available NMC experimental data, and various physical points are discussed. It is observed that the present EMC ratios are considerably improved compared with those of our previous works, in which only the valence quark distributions were considered to calculate the EMC ratio, and are closer to the NMC data. Finally, it is concluded that at a given appropriate hard scale, the LO approximation may be enough for calculating the nucleus EMC ratio.
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model
Directory of Open Access Journals (Sweden)
Ge-Jin Chu
2014-01-01
Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, for variable selection in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera
Bevilacqua, Andrea
2016-01-01
This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world’s highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, prevalently explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.
Risk assessment framework of fate and transport models applied to hazardous waste sites
International Nuclear Information System (INIS)
Hwang, S.T.
1993-06-01
Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, many issues remain unanswered. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include approaches for (1) estimating the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) estimating the absorption of organic chemicals in the soil matrix through the skin, and (3) estimating steady-state, near-field contaminant concentrations in the aquifer within a waste boundary.
A set of integrated environmental transport and diffusion models for calculating hazardous releases
International Nuclear Information System (INIS)
Pepper, D.W.
1996-01-01
A set of numerical transport and dispersion models is incorporated within a graphical interface shell to predict hazardous material released into the environment. The visual shell (EnviroView) consists of an object-oriented knowledge base, which is used for inventory control, site mapping and orientation, and monitoring of materials. Graphical displays of detailed sites, building locations, floor plans, and three-dimensional views within a room are available to the user using a point and click interface. In the event of a release to the environment, the user can choose from a selection of analytical, finite element, finite volume, and boundary element methods, which calculate atmospheric transport, groundwater transport, and dispersion within a building interior. The program runs on 486 personal computers under WINDOWS
Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation
Borga, M.; Creutin, J. D.
Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two
A comparative analysis of hazard models for predicting debris flows in Madison County, VA
Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.
2001-01-01
During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effects on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
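A hedged sketch of the infinite-slope factor-of-safety calculation that underlies stability models of this family (textbook form; the parameter values are illustrative, not Madison County measurements):

```python
# Infinite-slope factor of safety: resisting strength (cohesion plus
# effective-stress friction) over the driving shear stress.
import numpy as np

def factor_of_safety(c, phi_deg, theta_deg, z, h, gamma_s=18.0, gamma_w=9.81):
    """c: cohesion (kPa); phi: friction angle (deg); theta: slope angle (deg);
    z: soil depth (m); h: water-table height above the failure plane (m);
    gamma_s, gamma_w: soil and water unit weights (kN/m^3)."""
    phi, theta = np.radians(phi_deg), np.radians(theta_deg)
    resisting = c + (gamma_s * z - gamma_w * h) * np.cos(theta) ** 2 * np.tan(phi)
    driving = gamma_s * z * np.sin(theta) * np.cos(theta)
    return resisting / driving

# Nearly saturated, weakly cohesive colluvium on a 30-degree slope:
print(factor_of_safety(c=2.0, phi_deg=35.0, theta_deg=30.0, z=1.5, h=1.4))
# A value below 1 indicates failure, consistent with a high water table
# triggering debris flows in low-cohesion colluvium.
```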
Kemperman, A.D.A.M.; Borgers, A.W.J.; Timmermans, H.J.P.
2002-01-01
In this study we introduce a semi-parametric hazard-based duration model to predict the timing and sequence of theme park visitors' activity choice behavior. The model is estimated on the basis of observations of consumer choices in various hypothetical theme parks. These parks are constructed by
Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane
2018-04-01
The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models for a widely used rockfall barrier type and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The effect of the parameter ranges on the meta-model accuracy has also been investigated. The results of the study reveal that the meta-models are effective in reproducing with accuracy the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions in the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.
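A hedged sketch of the meta-model (surrogate) idea (a scikit-learn Gaussian-process regressor on made-up impact conditions; the paper's meta-model form and FE data are not reproduced):

```python
# Train a fast surrogate on expensive FE runs: inputs are six impact-condition
# parameters, the output is a stand-in for the barrier response quantity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 6))                        # six impact parameters
y = np.clip(1.2 * X[:, 0] * X[:, 1] - 0.3 * X[:, 2]
            + rng.normal(0.0, 0.05, 200), 0.0, None)  # toy FE-style output

surrogate = GaussianProcessRegressor().fit(X, y)
print(surrogate.predict(X[:3]))  # near-instant predictions replace FE reruns
```

Once trained, such a surrogate can be queried inside a rockfall trajectory simulation for every simulated impact, which would be computationally prohibitive with the full FE model.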
A 3-dimensional in vitro model of epithelioid granulomas induced by high aspect ratio nanomaterials
Directory of Open Access Journals (Sweden)
Hurt Robert H
2011-05-01
Full Text Available Abstract Background The most common causes of granulomatous inflammation are persistent pathogens and poorly-degradable irritating materials. A characteristic pathological reaction to intratracheal instillation, pharyngeal aspiration, or inhalation of carbon nanotubes is formation of epithelioid granulomas accompanied by interstitial fibrosis in the lungs. In the mesothelium, a similar response is induced by high aspect ratio nanomaterials, including asbestos fibers, following intraperitoneal injection. This asbestos-like behaviour of some engineered nanomaterials is a concern for their potential adverse health effects in the lungs and mesothelium. We hypothesize that high aspect ratio nanomaterials will induce epithelioid granulomas in nonadherent macrophages in 3D cultures. Results Carbon black particles (Printex 90 and crocidolite asbestos fibers were used as well-characterized reference materials and compared with three commercial samples of multiwalled carbon nanotubes (MWCNTs. Doses were identified in 2D and 3D cultures in order to minimize acute toxicity and to reflect realistic occupational exposures in humans and in previous inhalation studies in rodents. Under serum-free conditions, exposure of nonadherent primary murine bone marrow-derived macrophages to 0.5 μg/ml (0.38 μg/cm2 of crocidolite asbestos fibers or MWCNTs, but not carbon black, induced macrophage differentiation into epithelioid cells and formation of stable aggregates with the characteristic morphology of granulomas. Formation of multinucleated giant cells was also induced by asbestos fibers or MWCNTs in this 3D in vitro model. After 7-14 days, macrophages exposed to high aspect ratio nanomaterials co-expressed proinflammatory (M1 as well as profibrotic (M2 phenotypic markers. Conclusions Induction of epithelioid granulomas appears to correlate with high aspect ratio and complex 3D structure of carbon nanotubes, not with their iron content or surface area. This model
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
Modeling speech intelligibility based on the signal-to-noise envelope power ratio
DEFF Research Database (Denmark)
Jørgensen, Søren
of modulation frequency selectivity in the auditory processing of sound with a decision metric for intelligibility that is based on the signal-to-noise envelope power ratio (SNRenv). The proposed speech-based envelope power spectrum model (sEPSM) is demonstrated to account for the effects of stationary...... through three commercially available mobile phones. The model successfully accounts for the performance across the phones in conditions with a stationary speech-shaped background noise, whereas deviations were observed in conditions with “Traffic” and “Pub” noise. Overall, the results of this thesis...
Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo
2017-11-01
The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and a correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three methods, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracies were validated using receiver operating characteristic curves. The results showed that all three models have good prediction capabilities, with the SVM model having the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced for the study area can be applied to the management of hazards and risks in landslide-prone Hanyuan County.
A survey of basic reproductive ratios in vector-borne disease transmission modeling
Soewono, E.; Aldila, D.
2015-03-01
Vector-borne diseases are common in tropical and subtropical countries and have contributed to more than 10% of world infectious disease cases. Among the vectors responsible for transmitting these diseases are mosquitoes, ticks, fleas, flies, bugs and worms. Several of the diseases contribute to an increasing threat to human health, including malaria, dengue, filariasis, chikungunya, West Nile fever, yellow fever, encephalitis, and anthrax. It is necessary to understand the real process of infection and the factors that complicate transmission in order to come up with a sound mathematical model. Although it is not easy to simulate the real transmission process of the infection, almost all models have been developed from the long-known host-vector model. It constitutes the main transmission processes, i.e. birth, death, infection and recovery. From this simple model, the basic concepts of disease-free and endemic equilibria and the basic reproductive ratio can be well explained and understood. Theoretical, modeling, control and treatment aspects of disease transmission problems have then been developed for various related diseases. General constructions as well as specific forms of basic reproductive ratios for vector-borne diseases are discussed here.
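For a minimal host-vector model of the kind described, the basic reproductive ratio can be computed as the spectral radius of the next-generation matrix. The sketch below uses a generic Ross-Macdonald-type parameterization; all rates and the matrix entries are illustrative assumptions, not the survey's specific formulation.

```python
import numpy as np

# Illustrative parameters for a Ross-Macdonald-type host-vector model
b   = 0.3    # bites per vector per day
bh  = 0.3    # host infection probability per infectious bite
bv  = 0.3    # vector infection probability per bite on an infectious host
m   = 2.0    # vectors per host
gam = 0.1    # host recovery rate (1/day)
muv = 0.1    # vector mortality rate (1/day)

# Next-generation matrix K: hosts infected per infectious vector (lifetime
# 1/muv) and vectors infected per infectious host (infectious period 1/gam)
K = np.array([[0.0,              b * bh / muv],
              [b * bv * m / gam, 0.0         ]])
R0 = np.max(np.abs(np.linalg.eigvals(K)))
print(R0)   # equals sqrt(m * b**2 * bh * bv / (gam * muv))
```

The square root arises because a full transmission cycle requires two generations (host to vector and vector back to host), which is exactly the structure the next-generation matrix encodes.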
International Nuclear Information System (INIS)
Boissonnade, A; Hossain, Q; Kimball, J
2000-01-01
Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels at a given location based on the methodology in UCRL-53526 differ from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement it. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.
He, P L; Zhao, C X; Dong, Q Y; Hao, S B; Xu, P; Zhang, J; Li, J G
2018-01-20
Objective: To evaluate the occupational health risk of decorative coating manufacturing enterprises and to explore the applicability of the occupational hazard risk index model in health risk assessment, so as to provide a basis for the health management of enterprises. Methods: A decorative coating manufacturing enterprise in Hebei Province was chosen as the research object. According to the types of occupational hazards and the contact patterns, the occupational hazard risk index model was used to evaluate the health risk from occupational hazards in the key positions of the enterprise, and the results were checked against workplace measurements and occupational health examinations. Results: The noise-exposed positions of oily painters, water-borne painters, filling workers and packers were rated as moderate harm. The positions of color workers exposed to chromic acid salts and of oily painters exposed to butyl acetate were rated as mild harm. Other positions were harmless. The abnormality rate for noise exposure in the physical examination results was 6.25%; no abnormalities were found for the other risk factors. Conclusion: The occupational hazard risk index model can be used in the occupational health risk assessment of decorative coating manufacturing enterprises; noise was the key hazard among the occupational hazards in this enterprise.
Directory of Open Access Journals (Sweden)
Milevski Ivica
2013-01-01
Full Text Available In this paper, an approach to Geographic Information System (GIS) and Remote Sensing (RS) assessment of potential natural hazard areas (excess erosion, landslides, flash floods and fires) is presented. For that purpose, Pehchevo Municipality in the easternmost part of the Republic of Macedonia is selected as a case study area because of the high local impact of natural hazards on the environment, the socio-demographic situation and the local economy. First, the most relevant static factors for each type of natural hazard are selected (topography, land cover, anthropogenic objects and infrastructure). With GIS and satellite imagery, a multi-layer calculation is performed based on available traditional equations, clustering or discretization procedures. In this way, suitable, relatively "static" natural hazard maps (models) are produced. Then, dynamic (mostly climate-related) factors are included in the previous models, resulting in appropriate scenarios correlated with different amounts of precipitation, temperature, wind direction etc. Finally, the GIS-based scenarios are evaluated and tested with field checks or very fine resolution Google Earth imagery, showing good accuracy. Further development of such GIS models in connection with automatic remote meteorological stations and dynamic satellite imagery (like MODIS) will provide timely warning of coming natural hazards, helping to avoid damage or even casualties.
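The multi-layer GIS calculation described above amounts, in its simplest form, to a weighted overlay of discretized factor rasters. A minimal numpy sketch, with hypothetical weights and a hypothetical rainfall multiplier standing in for the dynamic climate factors:

```python
import numpy as np

# Toy 5x5 rasters for three "static" factors, each already discretized to
# hazard scores 1 (low) - 5 (high); the weights are illustrative and would
# in practice come from the chosen equations or expert judgment.
rng = np.random.default_rng(42)
slope    = rng.integers(1, 6, (5, 5))
landcov  = rng.integers(1, 6, (5, 5))
distance = rng.integers(1, 6, (5, 5))   # e.g. distance to roads/objects

weights = {"slope": 0.5, "landcov": 0.3, "distance": 0.2}
static_hazard = (weights["slope"] * slope
                 + weights["landcov"] * landcov
                 + weights["distance"] * distance)

# A "dynamic" climate factor (e.g. a rainfall scenario) can then scale or
# shift the static map to produce scenario-specific hazard maps.
rain_factor = 1.2   # hypothetical wet-scenario multiplier
scenario_hazard = np.clip(static_hazard * rain_factor, 1, 5)
print(scenario_hazard.round(2))
```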
Modeling the Plasma Flow in the Inner Heliosheath with a Spatially Varying Compression Ratio
Energy Technology Data Exchange (ETDEWEB)
Nicolaou, G. [Swedish Institute of Space Physics, Kiruna (Sweden); Livadiotis, G. [Southwest Research Institute, San Antonio, Texas (United States)
2017-03-20
We examine a semi-analytical non-magnetic model of the termination shock location previously developed by Exarhos and Moussas. In their study, the plasma flow beyond the shock is considered incompressible and irrotational; thus the flow potential is analytically derived from the Laplace equation. Here we examine the characteristics of the downstream flow in the heliosheath in order to resolve several inconsistencies in the Exarhos and Moussas model. In particular, the model is modified to be consistent with the Rankine-Hugoniot jump conditions and the geometry of the termination shock. It is shown that a shock compression ratio varying with latitude can lead to physically correct results. We describe the new model and present several simplified examples for a nearly spherical, strong termination shock. Under those simplifications, the upstream plasma is nearly adiabatic for large (~100 AU) heliosheath thickness.
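For reference, the Rankine-Hugoniot density compression ratio invoked above follows directly from the jump conditions. A short sketch, assuming a hydrodynamic shock with adiabatic index gamma = 5/3:

```python
import numpy as np

def compression_ratio(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density jump across a hydrodynamic shock."""
    m2 = mach**2
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

for M in [2, 4, 8, 100]:
    print(M, compression_ratio(M))   # tends to 4 for gamma = 5/3
```

The strong-shock limit of 4 is the benchmark against which a latitude-dependent compression ratio, as proposed in the abstract, would be compared.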
Flood Hazard Mapping using Hydraulic Model and GIS: A Case Study in Mandalay City, Myanmar
Directory of Open Access Journals (Sweden)
Kyu Kyu Sein
2016-01-01
Full Text Available This paper presents the use of flood frequency analysis integrated with a 1D hydraulic model (HEC-RAS) and a Geographic Information System (GIS) to prepare flood hazard maps of different return periods for the Ayeyarwady River at Mandalay City in Myanmar. Gumbel's distribution was used to calculate the flood peaks of different return periods, namely 10 years, 20 years, 50 years, and 100 years. The flood peaks from the frequency analysis were input into the HEC-RAS model to find the corresponding flood levels and extents in the study area. The model results were integrated with ArcGIS to generate flood plain maps. Flood depths and extents were identified through the flood plain maps. Analysis of the 100-year return period flood plain map indicated that 157.88 km2 (17.54% of the study area) is likely to be inundated. Predicted flood depths range from just above 0 to 24 m in the flood plains and on the river. Depths of 3 to 5 m were identified in the urban areas of Chanayetharzan, Patheingyi, and Amarapura Townships. The largest inundated area, 85 km2, was in the Amarapura Township.
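Gumbel-based flood peaks for given return periods can be estimated in a few lines. A minimal sketch using method-of-moments parameter estimates; the annual maximum series below is hypothetical, standing in for the Ayeyarwady gauge record:

```python
import numpy as np

# Hypothetical annual maximum discharges (m^3/s)
amax = np.array([18200, 21500, 19800, 25100, 23400, 20700, 26800,
                 22300, 24900, 19100, 27600, 21200, 23800, 25700])

# Method-of-moments Gumbel parameters
alpha = np.sqrt(6.0) * amax.std(ddof=1) / np.pi     # scale
u = amax.mean() - 0.5772 * alpha                    # location

for T in [10, 20, 50, 100]:
    yT = -np.log(-np.log(1.0 - 1.0 / T))            # reduced variate
    print(T, round(u + alpha * yT, 1))              # design flood peak
```

The resulting design peaks are the upstream boundary conditions that a hydraulic model such as HEC-RAS converts into water levels and inundation extents.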
Identifying model pollutants to investigate biodegradation of hazardous XOCs in WWTPs
Energy Technology Data Exchange (ETDEWEB)
Press-Kristensen, Kaare; Ledin, Anna; Schmidt, Jens Ejbye; Henze, Mogens [Department of Environment and Resources, Technical University of Denmark Building 115, 2800 Lyngby (Denmark)
2007-02-01
Xenobiotic organic compounds (XOCs) in wastewater treatment plant (WWTP) effluents might cause toxic effects in ecosystems. Several investigations have emphasized biodegradation as an important removal mechanism to reduce pollution with XOCs from WWTP effluents. The aim of the study was to design a screening tool to identify and select hazardous model pollutants for the further investigation of biodegradation in WWTPs. The screening tool consists of three criteria: the XOC is present in WWTP effluents, the XOC constitutes an intolerable risk in drinking water or the environment, and the XOC is expected to be biodegradable in WWTPs. The screening tool was tested on bisphenol A (BPA), carbamazepine (CBZ), di(2-ethylhexyl) phthalate (DEHP), 17β-estradiol (E2), estrone (E1), 17α-ethinylestradiol (EE2), ibuprofen, naproxen, nonylphenol (NP), and octylphenol (OP). BPA, DEHP, E2, E1, EE2, and NP passed all criteria in the screening tool and were selected as model pollutants. OP did not pass the filter and was rejected as a model pollutant. CBZ, ibuprofen, and naproxen were not finally evaluated due to insufficient data. (author)
Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas
2014-02-01
We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. Copyright © 2013 Wiley Periodicals, Inc.
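The reference ratio combination of the local and nonlocal models can be written compactly. One way to express it, with notation ours rather than the paper's:

```latex
p(x) \;\propto\; p_{\mathrm{local}}(x)\,
  \frac{p_{\mathrm{nonlocal}}\bigl(E(x)\bigr)}{p_{\mathrm{local}}\bigl(E(x)\bigr)}
```

Here E(x) is the energy-vector descriptor of the nonlocal structure, p_local is the probabilistic model of local structure, and p_local(E(x)) is the distribution that the local model alone induces on that descriptor; dividing it out prevents the nonlocal information from being counted twice.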
Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows
Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang
2018-03-01
In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves spurious velocities that are relatively small by LB standards, and the obtained numerical results show good agreement with analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
Non-Volcanic release of CO2 in Italy: quantification, conceptual models and gas hazard
Chiodini, G.; Cardellini, C.; Caliro, S.; Avino, R.
2011-12-01
Central and South Italy are characterized by the presence of many reservoirs naturally recharged by CO2 of deep provenance. In the western sector, the reservoirs feed hundreds of gas emissions at the surface. Many studies in recent years were devoted to (i) elaborating a map of CO2 Earth degassing of the region; (ii) assessing the gas hazard; (iii) developing methods suitable for measuring the gas fluxes from different types of emissions; (iv) elaborating a conceptual model of Earth degassing and its relation to the seismic activity of the region; and (v) developing physical numerical models of CO2 air dispersion. The main results obtained are: 1) A general, regional map of CO2 Earth degassing in Central Italy has been elaborated. The total flux of CO2 in the area has been estimated at ~10 Mt/a, which is released to the atmosphere through numerous dangerous gas emissions or by degassing spring waters (~10% of the CO2 globally estimated to be released by the Earth through volcanic activity). 2) An online, open-access, georeferenced database of the main CO2 emissions (~250) was set up (http://googas.ov.ingv.it). CO2 fluxes > 100 t/d characterize 14% of the degassing sites, while CO2 fluxes from 100 t/d to 10 t/d have been estimated for about 35% of the gas emissions. 3) The sites of the gas emissions are not suitable for life: the gas causes many accidents to animals and people. In order to mitigate the gas hazard, a specific model of CO2 air dispersion has been developed and applied to the main degassing sites. A relevant application regarded Mefite d'Ansanto, southern Apennines, which is the largest natural emission of low-temperature CO2-rich gases from a non-volcanic environment ever measured on Earth (~2000 t/d). Under low wind conditions, the gas flows along a narrow natural channel, producing a persistent gas river which over time has killed many people and animals. The application of the physical numerical model allowed us to
Conceptual model of volcanism and volcanic hazards of the region of Ararat valley, Armenia
Meliksetian, Khachatur; Connor, Charles; Savov, Ivan; Connor, Laura; Navasardyan, Gevorg; Manucharyan, Davit; Ghukasyan, Yura; Gevorgyan, Hripsime
2015-04-01
Armenia and the adjacent volcanically active regions in Iran, Turkey and Georgia are located in the collision zone between the Arabian and Eurasian lithospheric plates. The majority of studies of regional collision-related volcanism use the model proposed by Keskin (2003), where volcanism is driven by Neo-Tethyan slab break-off. In Armenia, >500 Quaternary-Holocene volcanoes from the Gegham, Vardenis and Syunik volcanic fields are hosted within pull-apart structures formed by active faults and their segments (Karakhanyan et al., 2002), while the tectonic position of the voluminous basalt-dacite Aragats volcano and its peripheral volcanic plateaus is different; its position away from major fault lines requires a more complex volcano-tectonic setup. Our detailed volcanological, petrological and geochemical studies provide insight into the nature of such volcanic activity in the region of Ararat valley. Most magmas, such as those erupted in Armenia, are volatile-poor and erupt fairly hot. Here we report newly discovered tephra sequences in Ararat valley that were erupted from the historically active Ararat stratovolcano and provide evidence for the explosive eruption of young, medium-K2O calc-alkaline and volatile-rich (>4.6 wt% H2O; amphibole-bearing) magmas. Such young eruptions, in addition to the ignimbrite and lava flow hazards from Gegham and Aragats, present a threat to more than 1.4 million people (~half of the population of Armenia). We will report numerical simulations of potential volcanic hazards for the region of Ararat valley near Yerevan, including tephra fallout, lava flows and the opening of new vents. Connor et al. (2012) J. Applied Volcanology 1:3, 1-19; Karakhanian et al. (2002), JVGR, 113, 319-344; Keskin, M. (2003) Geophys. Res. Lett. 30, 24, 8046.
Directory of Open Access Journals (Sweden)
N. Diodato
2004-01-01
Full Text Available Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosion, landslides, flash floods and river floods) occurring in a spatially and temporally random way and triggered by rainfall of different intensity and extent. Storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on short- and long-term climatic fluctuations, especially in rainfall quantity. An algorithm for characterizing this impact, called the Rainfall Hazard Index (RHI), is developed with an inexpensive methodology. In RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and that a sudden fluctuation in this regime, especially one exceeding the thresholds of an acceptable range of flexibility, may have disastrous consequences for the mountain environment. The RHI integrates two rainfall variables based upon current and historical storm depth data, both of a fixed duration, and a dimensionless parameter representing the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy). A database including 27 events which occurred during a 77-year period (1926-2002) was compared with the Benevento-station RHI(24h) for a qualitative validation. Trends in RHIx for annual maximum storms of duration 1, 3 and 24 h were also examined. Little change is observed at the 3- and 24-h storm durations, but a significant increase is found in the hazard of short, intense storms (RHIx(1h)), in agreement with a reduction in the return period of extreme rainfall events.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
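The PH-versus-AFT comparison can be reproduced outside SAS. A minimal Python sketch using the lifelines library (WeibullAFTFitter and CoxPHFitter in place of LIFEREG and PHREG) on simulated Weibull data with independent censoring; the data-generating coefficients are arbitrary:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n)             # treatment indicator
m = 0.5 * x + rng.normal(size=n)        # mediator depends on treatment
# Weibull survival times whose scale depends on treatment and mediator
T = rng.weibull(1.5, n) * np.exp(0.4 * x + 0.3 * m)
C = rng.weibull(1.5, n) * 2.0           # independent censoring times
df = pd.DataFrame({"x": x, "m": m,
                   "time": np.minimum(T, C),
                   "event": (T <= C).astype(int)})

# AFT: coefficients act on log survival time
aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
aft.print_summary()

# PH: coefficients act on the log hazard
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```

Comparing the two summaries on data like these makes the paper's point concrete: the AFT coefficients combine additively on the log-time scale, which is what allows the mediation decomposition, while hazard ratios do not.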
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake, we illustrate how seismic hazard calculations can, and do, take into account complexities such as the geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: which elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including
Energy Technology Data Exchange (ETDEWEB)
Mehta, Akansha; Mishra, Amit; Sharma, Manisha; Singh, Satnam; Basu, Soumen, E-mail: soumen.basu@thapar.edu [Thapar University, School of Chemistry and Biochemistry (India)
2016-07-15
In this study a microwave-assisted technique has been adopted for the synthesis of different weight ratios of TiO2 dispersed on a Santa Barbara Amorphous-15 (SBA-15) support. Morphological study revealed TiO2 particles (4–10 nm) uniformly distributed on SBA-15, while increasing SBA-15 content results in a higher specific surface area (524–237 m2/g). The diffraction intensity of the 101 plane of the anatase polymorph increased with increasing TiO2 ratio. All the photocatalysts were mesoporous and followed the Langmuir type IV isotherm. SBA-15 possesses the highest pore volume (0.93 cm3 g−1), which consistently decreased with TiO2 content and was lowest (0.50 cm3 g−1) for 5 wt% TiO2, followed by P25 (0.45 cm3 g−1), while the pore diameter increased after TiO2 incorporation due to pore strain. The photocatalytic activity of the nanocomposites was analysed for the photodegradation of alizarin dye and pentachlorophenol (PCP) under UV light irradiation. The reaction kinetics indicated the highest efficiency (98% for alizarin and 94% for PCP) for 5 wt% TiO2 compared to the other photocatalysts, and these nanocomposites were reused for several cycles, which is most important for heterogeneous photocatalytic degradation reactions. Graphical abstract: This study demonstrates the synthesis of silica-embedded TiO2 nanocomposites by a microwave-assisted technique and their catalytic influence on the degradation of organic dyes and pollutants. Higher loading of titania (SBA-15/TiO2, 1:5) results in better catalytic performance than commercial nano-TiO2 (P25).
van den Ende, D. A.; Maier, R. A.; van Neer, P. L. M. J.; van der Zwaag, S.; Randall, C. A.; Groen, W. A.
2013-01-01
In this work, the piezoelectric properties at high electric fields of dielectrophoretically aligned PZT—polymer composites containing high aspect ratio particles (such as short fibers) are presented. Polarization and strain as a function of electric field are evaluated. The properties of the composites are compared to those of PZT-polymer composites with equiaxed particles, continuous PZT fiber-polymer composites, and bulk PZT ceramics. From high-field polarization and strain measurements, the effective field dependent permittivity and piezoelectric charge constant in the poling direction are determined for dielectrophoresis structured PZT-polymer composites, continuous PZT fiber-polymer composites, and bulk PZT ceramics. The changes in dielectric properties of the inclusions and the matrix at high fields influence the dielectric and piezoelectric properties of the composites. It is found that the permittivity and piezoelectric charge constants increase towards a maximum at an applied field of around 2.5-5 kV/mm. The electric field at which the maximum occurs depends on the aspect ratio and degree of alignment of the inclusions. Experimental values of d33 at low and high applied fields are compared to a model describing the composites as a continuous polymer matrix containing PZT particles of various aspect ratios arranged into chains. Thickness mode coupling factors were determined from measured impedance data using fitted equivalent circuit model simulations. The relatively high piezoelectric strain constants, voltage constants, and thickness coupling factors indicate that such aligned short fiber composites could be useful as flexible large area transducers.
Wind-tunnel modelling of the tip-speed ratio influence on the wake evolution
Stein, Victor P.; Kaltenbach, Hans-Jakob
2016-09-01
Wind-tunnel measurements of the near-wake evolution of a three-bladed horizontal axis wind turbine (HAWT) model at a scale of 1:O(350), operating in uniform flow conditions and within a turbulent boundary layer at different tip speed ratios, are presented. Operational conditions are chosen to exclude Reynolds number effects on both the turbulent boundary layer and the rotor performance. Triple-wire anemometry is used to measure all three velocity components in the mid-vertical and mid-horizontal planes, covering the range from the near- to the far-wake region. In order to analyse wake properties systematically, the power and thrust coefficients of the turbine were measured as well. It is confirmed that realistic modelling of the wake evolution is not possible in a low-turbulence uniform approach flow. Profiles of mean velocity and turbulence intensity exhibit large deviations between the low-turbulence uniform flow and the turbulent boundary layer, especially in the far-wake region. For nearly constant thrust coefficients, differences in the evolution of the near-wake can be identified for tip speed ratios in the range from 6.5 to 10.5. It is shown that with increasing downstream distance the mean velocity profiles become indistinguishable, whereas for the turbulence statistics a subtle dependency on the tip speed ratio is still noticeable in the far-wake region.
Deriving metabolic engineering strategies from genome-scale modeling with flux ratio constraints.
Yen, Jiun Y; Nazem-Bokaee, Hadi; Freedman, Benjamin G; Athamneh, Ahmad I M; Senger, Ryan S
2013-05-01
Optimized production of bio-based fuels and chemicals from microbial cell factories is a central goal of systems metabolic engineering. To achieve this goal, a new computational method of using flux balance analysis with flux ratios (FBrAtio) was further developed in this research and applied to five case studies to evaluate and design metabolic engineering strategies. The approach was implemented using publicly available genome-scale metabolic flux models. Synthetic pathways were added to these models along with flux ratio constraints by FBrAtio to achieve increased (i) cellulose production from Arabidopsis thaliana; (ii) isobutanol production from Saccharomyces cerevisiae; (iii) acetone production from Synechocystis sp. PCC6803; (iv) H2 production from Escherichia coli MG1655; and (v) isopropanol, butanol, and ethanol (IBE) production from engineered Clostridium acetobutylicum. The FBrAtio approach was applied to each case to simulate a metabolic engineering strategy already implemented experimentally, and flux ratios were continually adjusted to find (i) the end-limit of increased production using the existing strategy, (ii) new potential strategies to increase production, and (iii) the impact of these metabolic engineering strategies on product yield and culture growth. The FBrAtio approach has the potential to design "fine-tuned" metabolic engineering strategies in silico that can be implemented directly with available genomic tools. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
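A flux ratio constraint of the FBrAtio type can be emulated in COBRApy by adding a linear constraint between two flux expressions. A sketch under stated assumptions: the bundled E. coli core "textbook" model and the reaction IDs PFK and G6PDH2r are placeholders for the genome-scale models and branch points used in the paper, and the 4:1 ratio is arbitrary.

```python
from cobra.io import load_model

# Placeholder model and reaction IDs; substitute the genome-scale model
# and the branch-point reactions of interest.
model = load_model("textbook")   # small E. coli core model for illustration
v1 = model.reactions.get_by_id("PFK").flux_expression
v2 = model.reactions.get_by_id("G6PDH2r").flux_expression

# Enforce the flux ratio v1 : v2 = 4 : 1 at the glucose-6-phosphate node,
# i.e. v1 - 4 * v2 = 0, analogous to an FBrAtio constraint.
ratio = model.problem.Constraint(v1 - 4.0 * v2, lb=0.0, ub=0.0)
model.add_cons_vars(ratio)

solution = model.optimize()
print(solution.objective_value)
```

Sweeping the ratio coefficient and re-optimizing mimics the paper's procedure of continually adjusting flux ratios to find the end-limit of a strategy and its cost in growth.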
Recovering the observed B/C ratio in a dynamic spiral-armed cosmic ray model
International Nuclear Information System (INIS)
Benyamin, David; Piran, Tsvi; Shaviv, Nir J.; Nakar, Ehud
2014-01-01
We develop a fully three-dimensional numerical code describing the diffusion of cosmic rays (CRs) in the Milky Way. It includes the nuclear spallation chain up to oxygen, and allows the study of various CR properties, such as the CR age, the grammage traversed, and the ratio between secondary and primary particles. This code enables us to explore a model in which a large fraction of the CR acceleration takes place in the vicinity of dynamic galactic spiral arms. We show that the effect of having dynamic spiral arms is to limit the age of CRs at low energies, because at low energies the time since the last spiral arm passage, rather than diffusion, governs the CR age. Using the model, the observed spectral dependence of the secondary-to-primary ratio is recovered without requiring further ingredients such as a galactic wind, re-acceleration, or special assumptions on the diffusivity. In particular, we obtain a secondary-to-primary ratio which increases with energy below about 1 GeV.
Directory of Open Access Journals (Sweden)
Yingjun Jiang
2015-04-01
Full Text Available In order to better understand the mechanical properties of graded crushed rocks (GCRs) and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR) test on GCRs. The effects of different testing conditions and of the micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied, and the reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR numerical test results are not significant. Accordingly, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The numerical results reveal that the CBR values increase with the friction coefficient at the contacts and the shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the CBR numerical and experimental results suggests that numerical simulation of CBR values is a promising aid for assessing the mechanical properties of GCRs and optimizing the grading design. Moreover, the numerical study can provide useful insight into the mesoscopic mechanisms.
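For orientation, the CBR value itself is a simple ratio of measured piston pressure to a standard pressure. A minimal sketch using the conventional standard pressures of 6.9 MPa at 2.5 mm and 10.3 MPa at 5.0 mm penetration; the pressure readings below are hypothetical:

```python
import numpy as np

# Hypothetical piston pressure readings from a (numerical) CBR test
penetration = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])  # mm
pressure    = np.array([0.8, 1.6, 2.3, 2.9, 3.4, 3.8, 4.6, 5.2])  # MPa

p25 = np.interp(2.5, penetration, pressure)
p50 = np.interp(5.0, penetration, pressure)

# CBR is the tested pressure over the standard crushed-stone pressure,
# conventionally 6.9 MPa at 2.5 mm and 10.3 MPa at 5.0 mm penetration.
cbr = max(p25 / 6.9, p50 / 10.3) * 100.0
print(round(cbr, 1))
```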
Samphutthanon, R.; Tripathi, N. K.; Ninsawat, S.; Duboz, R.
2014-12-01
The main objective of this research was the development of a hand, foot and mouth disease (HFMD) hazard zonation (HFMD-HZ) model by applying AHP and fuzzy AHP (FAHP) methodologies for weighting spatial factors such as disease incidence, socio-economic and physical factors. The outputs of AHP and FAHP were input into a Geographic Information System (GIS) process for spatial analysis. Fourteen criteria were selected as important factors: disease incidence over the 10 years from 2003 to 2012, population density, road density, land use and physical features. The results showed a consistency ratio (CR) for these main criteria of 0.075427 for AHP and of 0.092436 for FAHP; as both remained below the threshold of 0.1, the CR values were acceptable. After linking to actual geospatial data (disease incidence in 2013) through GIS spatial analysis for validation, the results of the FAHP approach were found to match observations more accurately than those of the AHP approach. The zones with the highest hazard of HFMD outbreaks were located in two main areas: central Muang Chiang Mai district and its suburbs, and Muang Chiang Rai district and its vicinity. The resulting hazard maps may be useful for organizing HFMD protection plans.
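The consistency ratio reported above is computed from the principal eigenvalue of the pairwise comparison matrix. A minimal sketch with an illustrative 4x4 matrix (not the study's):

```python
import numpy as np

# Example 4x4 pairwise comparison matrix on the Saaty scale; the values
# are illustrative, not the matrices used in the study.
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
lam_max = eigvals.real.max()
weights = np.abs(eigvecs[:, eigvals.real.argmax()].real)
weights /= weights.sum()                      # priority vector

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}     # Saaty's random indices
CI = (lam_max - n) / (n - 1)                  # consistency index
CR = CI / RI[n]                               # consistency ratio
print(weights.round(3), round(CR, 4))         # CR < 0.1 is acceptable
```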
Kreus, Markus; Paetsch, Johannes; Grosse, Fabian; Lenhart, Hermann; Peck, Myron; Pohlmann, Thomas
2017-04-01
Ongoing ocean acidification (OA) and climate-change-related trends impact the physical (temperature), chemical (CO2 buffer capacity) and biological (stoichiometric) properties of the marine environment. These threats affect the global ocean, but they appear particularly pronounced in marginal and shelf seas. Marine biogeochemical models are often used to investigate the impacts of climate change and changes in OA on the marine system as well as its exchange with the atmosphere. Different studies showed that both the structural composition of the models and the elemental ratios of particulate organic matter in the surface ocean affect the key processes controlling the ocean's efficiency in storing atmospheric excess carbon. Recent studies focusing on the variability of the elemental ratios of phytoplankton found that the high plasticity of C:N:P ratios enables the storage of large amounts of carbon by incorporation into carbohydrates and lipids. Our analysis focuses on the North Sea, a temperate European shelf sea, for the period 2000-2014. We performed an ensemble of model runs differing only in phytoplankton stoichiometry, representing combinations of C:P = [132.5, 106, 79.5] and N:P = [20, 16, 12] (i.e., the Redfield ratio +/- 25%). We systematically examine the variations in annual averages of net primary production (NPP), net ecosystem production in the upper 30 m (NEP30), export production below 30 m depth (EXP30), and the air-sea flux of CO2 (ASF). Ensemble average fluxes (and standard deviations) were NPP = 15.4 (2.8) mol C m-2 a-1, NEP30 = 5.4 (1.1) mol C m-2 a-1, EXP30 = 8.1 (1.1) mol C m-2 a-1 and ASF = 1.1 (0.5) mol C m-2 a-1. All key parameters exhibit only minor variations along the axis of constant C:N, but correlate positively with increasing C:P and decreasing N:P ratios. Concerning regional differences, the lowest variations in local fluxes due to different stoichiometric ratios are found in the shallow southern and coastal North Sea. Highest
Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng
2018-04-01
The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has mostly been restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and the ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on an ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation and could provide a useful reference for quantitative stress measurement using ML sensors in general.
DEFF Research Database (Denmark)
Kook, Junghwan; Jensen, Jakob Søndergaard
2014-01-01
The aim of this paper is to investigate the enhancement of the damping ratio of a structure with embedded microbeam resonators in air-filled internal cavities. In this context, we discuss theoretical aspects in the framework of the effective modal damping ratio (MDR) and derive an approximate... relation expressing how an increased damping due to the acoustic medium surrounding the microbeam affects the MDR of the macrobeam. We further analyze the effect of including dissipation of the acoustic medium by using finite element (FE) analysis with acoustic-structure interaction (ASI) using a simple... phenomenological acoustic loss model. An eigenvalue analysis is carried out to demonstrate the improvement of the damping characteristic of the macrobeam with the resonating microbeam in the lossy air, and the results are compared to a forced vibration analysis for a macrobeam with one or multiple embedded...
Directory of Open Access Journals (Sweden)
Sohair F Higazi
2013-02-01
Full Text Available Regression analysis depends on several assumptions that have to be satisfied. A major assumption that is never satisfied when variables come from contiguous observations is the independence of error terms. Spatial analysis addresses the violation of this assumption with two derived models that take the contiguity of observations into account. The data used are from Egypt's latest (2006) census, for 93 counties in seven adjacent governorates of the middle Delta. The dependent variable is the percentage of individuals classified as poor (those earning less than $1 daily). Predictors are demographic indicators. Exploratory Spatial Data Analysis (ESDA) is performed to examine the existence of spatial clustering and spatial autocorrelation between neighboring counties. The ESDA revealed spatial clusters and spatial correlation between locations. Three statistical models are applied to the data: the Ordinary Least Squares regression model (OLS), the Spatial Error Model (SEM) and the Spatial Lag Model (SLM). The likelihood ratio test and several information criteria are used to compare SLM and SEM to OLS. The SEM model proved to be better than the SLM model. Recommendations are drawn regarding the two spatial models used.
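The OLS/SLM/SEM comparison can be reproduced with PySAL's spreg package. A sketch on synthetic lattice data; the weights object, predictors and coefficients below are placeholders for the county shapefile and census variables:

```python
import numpy as np
from libpysal.weights import lat2W
from spreg import OLS, ML_Lag, ML_Error

# Synthetic stand-in for the county data: a 10x10 lattice of areal units
w = lat2W(10, 10)
w.transform = "r"                             # row-standardized weights
rng = np.random.default_rng(0)
n = w.n
X = rng.normal(size=(n, 2))                   # placeholder demographic predictors
y = (1.0 + X @ np.array([0.8, -0.5]) + rng.normal(size=n)).reshape(-1, 1)

ols = OLS(y, X, w=w, spat_diag=True)          # baseline with spatial diagnostics
lag = ML_Lag(y, X, w=w)                       # Spatial Lag Model (SLM)
err = ML_Error(y, X, w=w)                     # Spatial Error Model (SEM)
# Likelihood-based comparison of SLM and SEM, as in the paper
print(lag.logll, err.logll)
```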
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields, including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both the regression parameters and the baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not require estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted, indicating that they work well in practical situations.
DEFF Research Database (Denmark)
Nordahl, H; Rod, NH; Frederiksen, BL
2013-01-01
seven Danish cohort studies were linked to registry data on education and incidence of CHD. Mediation by smoking, low physical activity, and body mass index (BMI) of the association between education and CHD was estimated by applying newly proposed methods for mediation based on the additive hazards... % CI: 12, 22) for women and 37 (95% CI: 28, 46) for men could be ascribed to the pathway through smoking. Further, 39 (95% CI: 30, 49) cases for women and 94 (95% CI: 79, 110) cases for men could be ascribed to the pathway through BMI. The effects of low physical activity were negligible. Using... contemporary methods for mediation, the additive hazards model, we indicated the absolute numbers of CHD cases prevented by modifying smoking and BMI. This study confirms previous claims based on the Cox proportional hazards model that behavioral risk factors partially mediate the effect of education on CHD...
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior in data where the number of observations is limited. In this paper, earthquake patterns in the Zagros have been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Notably, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
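A fuzzy rule-based predictor of this kind can be sketched with scikit-fuzzy. The antecedent, membership functions and rules below are illustrative placeholders, not the study's calibrated system:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Hypothetical input standing in for the study's seismicity predictors
rate   = ctrl.Antecedent(np.arange(0, 11, 0.1), "seismicity_rate")
hazard = ctrl.Consequent(np.arange(0, 11, 0.1), "hazard")

rate["low"]    = fuzz.trimf(rate.universe, [0, 0, 5])
rate["medium"] = fuzz.trimf(rate.universe, [2, 5, 8])
rate["high"]   = fuzz.trimf(rate.universe, [5, 10, 10])
hazard["low"]    = fuzz.trimf(hazard.universe, [0, 0, 5])
hazard["medium"] = fuzz.trimf(hazard.universe, [2, 5, 8])
hazard["high"]   = fuzz.trimf(hazard.universe, [5, 10, 10])

rules = [ctrl.Rule(rate["low"], hazard["low"]),
         ctrl.Rule(rate["medium"], hazard["medium"]),
         ctrl.Rule(rate["high"], hazard["high"])]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["seismicity_rate"] = 7.2
sim.compute()
print(sim.output["hazard"])   # defuzzified hazard score on a 0-10 scale
```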
Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration
Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim
2015-04-01
In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome-collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models to each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome-collapse flows, and dome-collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, and then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.
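The partial-pooling structure described above translates directly into a probabilistic program. A minimal PyMC sketch with synthetic data standing in for FlowDat; the priors and simulated values are illustrative:

```python
import numpy as np
import pymc as pm

# Synthetic stand-in for FlowDat: log10(H/L) vs log10(volume) for
# dome-collapse flows at several volcanoes (values are illustrative).
rng = np.random.default_rng(0)
n_volc, n_obs = 5, 30
volcano = np.repeat(np.arange(n_volc), n_obs)
log_vol = rng.uniform(4, 8, n_volc * n_obs)
true_slopes = rng.normal(-0.1, 0.02, n_volc)
log_HL = -0.2 + true_slopes[volcano] * log_vol + rng.normal(0, 0.05, n_volc * n_obs)

with pm.Model() as hier:
    # Global distribution of regression slopes shared across volcanoes
    mu_b = pm.Normal("mu_b", 0, 1)
    sd_b = pm.HalfNormal("sd_b", 0.5)
    b = pm.Normal("b", mu_b, sd_b, shape=n_volc)   # per-volcano slopes
    a = pm.Normal("a", 0, 1)
    sigma = pm.HalfNormal("sigma", 0.5)
    pm.Normal("obs", a + b[volcano] * log_vol, sigma, observed=log_HL)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["b"].mean(dim=("chain", "draw")).values)
```

The per-volcano slopes are shrunk toward the global mean in proportion to how little data each volcano contributes, which is precisely the leverage on sparse datasets the abstract describes.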
Taroni, M.; Selva, J.
2017-12-01
In this work we show how we built an ensemble seismic hazard model for the magnitude distribution for the TSUMAPS-NEAM EU project (http://www.tsumaps-neam.eu/). The considered source area includes the whole NEAM region (North East Atlantic, Mediterranean and connected seas). We build our models using the catalogs (EMEC and ISC), their completeness, and the regionalization provided by the project. We developed four alternative implementations of a Bayesian model, considering tapered or truncated Gutenberg-Richter distributions and a fixed or variable b-value. The frequency-size distribution is based on the Weichert formulation. This allows all the frequency-size distribution parameters (a-value, b-value, and corner magnitude) to be assessed simultaneously, using multiple completeness periods for the different magnitudes. With respect to previous studies, we introduce the tapered Pareto distribution (in addition to the classical truncated Pareto), and we build a novel approach to quantify the prior distribution. For each alternative implementation, we set the prior distributions using the global seismic data grouped according to the different types of tectonic setting, and assigned them to the related regions. The estimation is based on the complete (not declustered) local catalog in each region. Using the complete catalog also allows us to consider foreshocks and aftershocks in the seismic rate computation: the Poissonicity of the tsunami events (and similarly of the exceedances of the PGA) is ensured by Le Cam's theorem. This Bayesian approach provides robust estimations even in zones where few events are available, but also lets us explore the uncertainty associated with the estimation of the magnitude distribution parameters (e.g. with the classical Metropolis-Hastings Monte Carlo method). Finally we merge all the models with their uncertainty to create the ensemble model that represents our knowledge of the seismicity in the
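The tapered Gutenberg-Richter alternative mentioned above has a closed-form survival function in seismic moment (a Kagan-style tapered Pareto). A short sketch, assuming the Hanks-Kanamori moment-magnitude relation and an illustrative corner magnitude:

```python
import numpy as np

def moment(mw):
    """Seismic moment (N m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def tapered_gr_ccdf(mw, mw_min, beta=0.67, mw_corner=8.0):
    """Tapered Pareto survival function in seismic moment."""
    m, mt, mc = moment(mw), moment(mw_min), moment(mw_corner)
    return (mt / m) ** beta * np.exp((mt - m) / mc)

mags = np.arange(5.0, 9.1, 0.5)
print(np.round(tapered_gr_ccdf(mags, mw_min=5.0), 6))
# Beyond the corner magnitude the exponential taper drives rates well
# below the pure power law of the truncated model.
```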
Combining SLBL routine with landslide-generated tsunami model for a quick hazard assessment tool
Franz, Martin; Rudaz, Benjamin; Jaboyedoff, Michel; Podladchikov, Yury
2016-04-01
Regions with steep topography are potentially subject to landslide-induced tsunami, because of the proximity between lakes, rivers, sea shores and potential instabilities. The concentration of population and infrastructure on the water body shores and in downstream valleys could lead to catastrophic consequences. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a tool which allows the construction of the landslide geometry and is able to simulate its propagation, the generation and propagation of the wave and, eventually, the spread on the shores or the associated downstream flow. The tool is developed in the Matlab environment, with a graphical user interface (GUI) to select the parameters in a user-friendly manner. The whole process is done in three steps involving different methods. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is simulated using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by combining the two latter sets of equations. The intensity map is based on the flooding criterion used in Switzerland, provided by the OFEG, and is computed as the product of the velocity and the depth obtained during the simulation. The tool can be used for hazard assessment in the case of well-known landslides, where the SLBL routine can be constrained and checked for realistic construction of the geometrical model. In less well-known cases, various failure plane geometries can be built automatically within a given range, and thus a multi-scenario approach is used. In any case, less well-known parameters such as the landslide velocity, its run-out distance, etc. can also be set to vary within given ranges, leading to multi-scenario analyses.
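The Lax-Friedrichs discretization of the shallow water equations is compact enough to sketch. A minimal 1D Python version with periodic boundaries and no wet/dry transition handling (which the actual tool implements); the initial condition and resolution are illustrative:

```python
import numpy as np

# One Lax-Friedrichs step for the 1D shallow water equations with
# U = [h, hu] and F(U) = [hu, hu^2/h + g h^2 / 2].
g = 9.81

def flux(U):
    h, hu = U
    return np.array([hu, hu**2 / np.maximum(h, 1e-8) + 0.5 * g * h**2])

def lax_friedrichs_step(U, dx, dt):
    Um = np.roll(U, 1, axis=1)    # left neighbours (periodic boundaries)
    Up = np.roll(U, -1, axis=1)   # right neighbours
    return 0.5 * (Um + Up) - dt / (2 * dx) * (flux(Up) - flux(Um))

# Dam-break-like initial condition on a periodic domain
x = np.linspace(0, 100, 400)
h = np.where(x < 50, 2.0, 1.0)
U = np.vstack([h, np.zeros_like(h)])
dx = x[1] - x[0]
for _ in range(200):
    dt = 0.4 * dx / np.max(np.abs(U[1] / U[0]) + np.sqrt(g * U[0]))  # CFL
    U = lax_friedrichs_step(U, dx, dt)
print(U[0].min(), U[0].max())
```

The scheme's built-in numerical diffusion is what provides the stabilization mentioned in the abstract, at the cost of smearing sharp wave fronts.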
Hazard Models From Periodic Dike Intrusions at Kīlauea Volcano, Hawai`i
Montgomery-Brown, E. K.; Miklius, A.
2016-12-01
The persistence and regular recurrence intervals of dike intrusions in the East Rift Zone (ERZ) of Kīlauea Volcano lead to the possibility of constructing a time-dependent intrusion hazard model. Dike intrusions are commonly observed in Kīlauea Volcano's ERZ and can occur repeatedly in regions that correlate with seismic segments (sections of rift seismicity with persistent definitive lateral boundaries) proposed by Wright and Klein (USGS PP1806, 2014). Five such ERZ intrusions have occurred since 1983 with inferred locations downrift of the bend in Kīlauea's ERZ, with the first (1983) being the start of the ongoing ERZ eruption. The ERZ intrusions occur on one of two segments that are spatially coincident with seismic segments: Makaopuhi (1993 and 2007) and Nāpau (1983, 1997, and 2011). During each intrusion, the amount of inferred dike opening was between 2 and 3 meters. The times between ERZ intrusions for same-segment pairs are all close to 14 years: 14.07 (1983-1997), 14.09 (1997-2011), and 13.95 (1993-2007) years, with the Nāpau segment becoming active about 3.5 years after the Makaopuhi segment in each case. Four additional upper ERZ intrusions are also considered here. Dikes in the upper ERZ have much smaller opening (~10 cm) and have shorter recurrence intervals of about 8 years with more variability. The amount of modeled dike opening during each of these events roughly corresponds to the amount of seaward south flank motion and deep rift opening accumulated in the time between events. Additionally, the recurrence interval of 14 years appears to be unaffected by the magma surge of 2003-2007, suggesting that flank motion, rather than magma supply, could be a controlling factor in the timing and periodicity of intrusions. Flank control over the timing of magma intrusions runs counter to the historical research suggesting that dike intrusions at Kīlauea are driven by magma overpressure. This relatively free sliding may have resulted from decreased
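A time-dependent hazard of the kind proposed here can be illustrated with a renewal model. The sketch below computes the conditional probability of an intrusion within the next interval given the time elapsed since the last event, assuming a Gaussian recurrence distribution; the 14-yr mean comes from the intervals quoted above, while the 1-yr standard deviation is a purely hypothetical choice (the three observed intervals scatter by far less).

```python
from scipy.stats import norm

def conditional_probability(t_elapsed, dt, mu=14.0, sigma=1.0):
    """P(intrusion within the next dt years | quiet for t_elapsed years)
    under a Gaussian renewal model. mu is the ~14-yr recurrence interval;
    sigma = 1 yr is a hypothetical aleatory spread for illustration."""
    survive = norm.sf(t_elapsed, mu, sigma)
    fail = survive - norm.sf(t_elapsed + dt, mu, sigma)
    return fail / survive if survive > 0 else 1.0

for t in (5.0, 12.0, 13.5):
    print(f"{t:5.1f} yr since last event: "
          f"P(next 1 yr) = {conditional_probability(t, 1.0):.3f}")
```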
Allometric Scaling and Cell Ratios in Multi-Organ in vitro Models of Human Metabolism
International Nuclear Information System (INIS)
Ucciferri, Nadia; Sbrana, Tommaso; Ahluwalia, Arti
2014-01-01
Intelligent in vitro models able to recapitulate the physiological interactions between tissues in the body have enormous potential as they enable detailed studies on specific two-way or higher order tissue communication. These models are the first step toward building an integrated picture of systemic metabolism and signaling in physiological or pathological conditions. However, the rational design of in vitro models of cell–cell or cell–tissue interaction is difficult as quite often cell culture experiments are driven by the device used, rather than by design considerations. Indeed, very little research has been carried out on in vitro models of metabolism connecting different cell or tissue types in a physiologically and metabolically relevant manner. Here, we analyze the physiological relationship between cells, cell metabolism, and exchange in the human body using allometric rules, downscaling them to an organ-on-a-plate device. In particular, in order to establish appropriate cell ratios in the system in a rational manner, two different allometric scaling models (cell number scaling model and metabolic and surface scaling model) are proposed and applied to a two compartment model of hepatic-vascular metabolic cross-talk. The theoretical scaling studies illustrate that the design and hence relevance of multi-organ models is principally determined by experimental constraints. Two experimentally feasible model configurations are then implemented in a multi-compartment organ-on-a-plate device. An analysis of the metabolic response of the two configurations demonstrates that their glucose and lipid balance is quite different, with only one of the two models recapitulating physiological-like homeostasis. In conclusion, not only do cross-talk and physical stimuli play an important role in in vitro models, but the numeric relationship between cells is also crucial to recreate in vitro interactions, which can be extrapolated to the in vivo reality.
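The downscaling logic can be sketched in a few lines. The function below rescales a whole-body cell count to a device-scale equivalent organism using a scaling exponent k, so that k = 1 mimics a cell-number-style scaling (counts proportional to mass) and k = 3/4 mimics a metabolic-style scaling (Kleiber's law); the masses and cell counts are hypothetical illustrations, not the paper's values.

```python
def scaled_cell_number(n_human, k, m_human=70.0, m_device_equiv=7e-5):
    """Downscale a whole-body cell count n_human to a device representing
    a hypothetical equivalent organism of mass m_device_equiv (kg),
    assuming the relevant quantity scales as (mass ratio)**k."""
    return n_human * (m_device_equiv / m_human) ** k

# Hypothetical whole-body hepatocyte count, scaled under both rules.
for k, name in ((1.0, "cell-number-style (k=1)"),
                (0.75, "metabolic-style (k=3/4)")):
    print(f"{name}: {scaled_cell_number(2.0e11, k):.2e} cells in device")
```

Because the two exponents give cell numbers that differ by orders of magnitude at device scale, the choice of scaling rule directly drives the cell ratios, which is the design question the paper addresses.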
Energy Technology Data Exchange (ETDEWEB)
Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)
2015-06-15
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.
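A hazard-ranking cluster analysis of this kind can be prototyped with standard tools. The sketch below applies Ward hierarchical clustering to a matrix of per-material toxicity scores; the matrix is a random stand-in with the study's dimensions (68 materials), not the actual EZ Metric data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows = nanomaterials, columns = EZ-Metric-
# style endpoint scores at several exposure levels.
scores = rng.random((68, 10))

# Ward linkage on Euclidean distances between toxicity profiles.
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 clusters
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} materials")
```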
Lu, Y.
2017-12-01
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in the soil carbon balance and in land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is crucial not only for future yield prediction under a changing climate, but also for understanding the energy and water cycles of winter wheat dominated regions. A winter wheat growth model has been developed in the Community Land Model 4.5 (CLM4.5), but its responses to irrigation and nitrogen fertilization have not been validated. In this study, I will validate the winter wheat growth response to irrigation and nitrogen fertilization at five winter wheat field sites (TXLU, KSMA, NESA, NDMA, and ABLE) in North America, which were originally designed to understand winter wheat response to nitrogen fertilization and water treatments (4 nitrogen levels and 3 irrigation regimes). I also plan to further update the linkages between winter wheat yield and cold hazards. The previous cold damage function affects yield only indirectly, through a reduction in leaf area index (LAI) and hence photosynthesis; this approach can sometimes produce a spuriously high yield because the reduced LAI conserves more nutrients for the grain-filling stage.
Ranking of several ground-motion models for seismic hazard analysis in Iran
International Nuclear Information System (INIS)
Ghasemi, H; Zare, M; Fukushima, Y
2008-01-01
In this study, six attenuation relationships are classified with respect to the ranking scheme proposed by Scherbaum et al (2004 Bull. Seismol. Soc. Am. 94 1–22). First, the strong motions recorded during the 2002 Avaj, 2003 Bam, 2004 Kojour and 2006 Silakhor earthquakes are consistently processed. Then the normalized residual sets are determined for each selected ground-motion model, considering the strong-motion records chosen. The main advantage of these records is that the corresponding information about the causative fault plane has been well studied for the selected events. Such information is used to estimate several control parameters which are essential inputs for attenuation relations. The selected relations (Zare et al (1999 Soil Dyn. Earthq. Eng. 18 101–23); Fukushima et al (2003 J. Earthq. Eng. 7 573–98); Sinaeian (2006 PhD Thesis International Institute of Earthquake Engineering and Seismology, Tehran, Iran); Boore and Atkinson (2007 PEER, Report 2007/01); Campbell and Bozorgnia (2007 PEER, Report 2007/02); and Chiou and Youngs (2006 PEER Interim Report for USGS Review)) have been deemed suitable for predicting peak ground-motion amplitudes in the Iranian plateau. Several graphical techniques and goodness-of-fit measures are also applied for statistical analysis of the distribution of the normalized residual sets. Such analysis reveals that the ground-motion models developed using Iranian strong-motion records are the most appropriate ones in the Iranian context. The results of the present study are applicable to seismic hazard assessment projects in Iran.
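The basic ranking ingredient is simple to compute. The sketch below forms normalized residuals between observed and predicted logarithmic ground motions and summarizes their distribution, in the spirit of the Scherbaum et al (2004) scheme; the observations, model median and sigma are hypothetical numbers.

```python
import numpy as np

def normalized_residuals(log_obs, log_pred, sigma):
    """Normalized residuals z = (ln(obs) - ln(pred)) / sigma; for a
    well-calibrated ground-motion model, z should be ~ N(0, 1)."""
    return (log_obs - log_pred) / sigma

def rank_statistics(z):
    # Simple goodness-of-fit summaries; the full scheme also inspects
    # the shape of the z distribution graphically.
    return {"mean": np.mean(z), "std": np.std(z, ddof=1),
            "median_|z|": np.median(np.abs(z))}

# Hypothetical example: observed vs. predicted PGA for one model.
rng = np.random.default_rng(1)
log_obs = rng.normal(-1.0, 0.6, size=200)   # stand-in observations
log_pred, sigma = -1.1, 0.5                 # model median and aleatory sigma
print(rank_statistics(normalized_residuals(log_obs, log_pred, sigma)))
```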
A Risk Assessment Model for Water Resources: releases of dangerous and hazardous substances.
Rebelo, Anabela; Ferra, Isabel; Gonçalves, Isolina; Marques, Albertina M
2014-07-01
Many dangerous and hazardous substances are used, transported and handled daily in diverse situations, from domestic use to industrial processing, and during those operations spills or other anomalous situations may occur that can lead to contaminant releases followed by contamination of surface water or groundwater through direct or indirect pathways. When dealing with this problem, rapid, technically sound decisions are desirable, and complex methods may not be able to deliver information quickly. This work describes a simple conceptual model built on multi-criteria analysis involving a strategic appraisal for contamination risk assessment, to support local authorities in rapid technical decisions. The model involves screening for environmental risk sources, focussing on persistent, bioaccumulative and toxic (PBT) substances that may be discharged into water resources. It is a simple tool that can be used to follow up actual accident scenarios in real time and to support daily activities, such as site inspections.
Scaling model for high-aspect-ratio microballoon direct-drive implosions at short laser wavelengths
International Nuclear Information System (INIS)
Schirmann, D.; Juraszek, D.; Lane, S.M.; Campbell, E.M.
1992-01-01
A scaling model for hot spherical ablative implosions in direct-drive mode is presented. The model results have been compared with experiments from LLE, ILE, and LLNL. Reduction of the neutron yield due to illumination nonuniformities is taken into account by the assumption that the neutron emission is cut off when the gas shock wave reflected off the center meets the incoming pusher, i.e., at a time when the probability of shell breakup is greatly enhanced. The main advantage of this semiempirical scaling model is that it elucidates the principal features of these simple implosions and permits one to estimate very quickly the performance of a high-aspect-ratio direct-drive target illuminated by short-wavelength laser light. (Author)
Directory of Open Access Journals (Sweden)
Fidel Ernesto Castro Morales
2016-03-01
Full Text Available Abstract Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within up to 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, while allowing for the incorporation of covariates and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
A modified atmospheric non-hydrostatic model on low aspect ratio grids: part II
Directory of Open Access Journals (Sweden)
Wen-Yih Sun
2013-06-01
Full Text Available Sun et al. (2012) proposed a modified non-hydrostatic model (MNH), in which the left-hand side of the continuity equation is multiplied by a parameter δ (4 ≤ δ ≤ 16 in that article) to suppress high-frequency acoustic waves. They showed that the MNH allows a longer time step than the original non-hydrostatic model (NH). The MNH is also more accurate and efficient than the horizontal explicit and vertical implicit scheme (HE-VI) when the aspect ratio (Δx/Δz) is small. In addition to multiplying by the parameter δ, here we propose to add a smoothing on the right-hand side of the continuity equation in the MNH to damp the shortest sound waves. Linear stability analysis and non-linear model simulations show that the MNH with smoothing (henceforth abbreviated as MNHS) can use twice the time interval of the MNH while maintaining the same accuracy. The MNHS is also more accurate and efficient than HE-VI when the aspect ratio is small.
Particle ratios from AGS to RHIC in an interacting hadronic model
International Nuclear Information System (INIS)
Zschiesche, D; Zeeb, G; Paech, K; Schramm, S; Stoecker, H
2004-01-01
The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) σ-ω approach. The commonly adopted non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking into account any interactions. In contrast, the chiral SU(3) model predicts temperature- and density-dependent effective hadron masses and effective chemical potentials in the medium, and a transition to a chirally restored phase at high temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase transition occurred in those collisions, 'freezing' of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ by up to 150 MeV from their vacuum values.
[Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the point and interval estimates of the PR, and the convergence, of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: additionally adjusting for the distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but the two approaches were highly consistent in estimating the PR. Therefore, the Bayesian log-binomial regression model can estimate the PR effectively with fewer convergence problems, and has advantages in application over the conventional log-binomial regression model.
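For readers reproducing the frequentist side of this comparison, a log-binomial model is simply a binomial GLM with a log link, so that exponentiated coefficients are prevalence ratios rather than odds ratios. The sketch below fits one with statsmodels on synthetic data (all variable names and values are hypothetical, and the capitalized Log link class assumes a recent statsmodels version); as the abstract notes, such fits can fail to converge when fitted probabilities approach 1, which is what motivates the Bayesian and COPY alternatives.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical data: binary care-seeking outcome, binary exposure
# (caregiver recognizes the risk signs) and two covariates.
n = 800
recog = rng.integers(0, 2, n)
educ = rng.normal(9.0, 3.0, n)     # caregiver education, years
dist = rng.normal(5.0, 2.0, n)     # village-township distance, km
p = 0.55 * np.exp(0.12 * recog)    # true PR for recognition = exp(0.12)
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([recog, educ, dist]))
# Binomial family + log link = log-binomial model.
model = sm.GLM(y, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
res = model.fit()
print("PR for recognition:", np.exp(res.params[1]))
print("95% CI:", np.exp(res.conf_int()[1]))  # exponentiate log-scale CI
```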
GPS Modeling and Analysis. Summary of Research: GPS Satellite Axial Ratio Predictions
Axelrad, Penina; Reeh, Lisa
2002-01-01
This report outlines the algorithms developed at the Colorado Center for Astrodynamics Research to model yaw and predict the axial ratio as measured from a ground station. The algorithms are implemented in a collection of Matlab functions and scripts that read certain user input, such as ground station coordinates, the UTC time, and the desired GPS (Global Positioning System) satellites, and compute the above-mentioned parameters. The position information for the GPS satellites is obtained from Yuma almanac files corresponding to the prescribed date. The results are displayed graphically through time histories and azimuth-elevation plots.
Symmetric Anderson impurity model: Magnetic susceptibility, specific heat and Wilson ratio
Zalom, Peter; Pokorný, Vladislav; Janiš, Václav
2018-05-01
We extend the spin-polarized effective-interaction approximation of the parquet renormalization scheme from Refs. [1,2] applied on the symmetric Anderson model by adding the low-temperature asymptotics of the total energy and the specific heat. We calculate numerically the Wilson ratio and determine analytically its asymptotic value in the strong-coupling limit. We demonstrate in this way that the exponentially small Kondo scale from the strong-coupling regime emerges in qualitatively the same way in the spectral function, magnetic susceptibility and the specific heat.
Stuchbery, A. E.; Ryan, C. G.; Bolotin, H. H.; Morrison, I.; Sie, S. H.
1981-07-01
The enhanced transient hyperfine field manifest at the nuclei of swiftly recoiling ions traversing magnetized ferromagnetic materials was utilized to measure the gyromagnetic ratios of the 2_1^+, 2_2^+ and 4_1^+ states in 198Pt by the thin-foil technique. The states of interest were populated by Coulomb excitation using a beam of 220 MeV 58Ni ions. The results obtained were: g(2_1^+) = 0.324 ± 0.026; g(2_2^+) = 0.34 ± 0.06; g(4_1^+) = 0.34 ± 0.06. In addition, these measurements served to discriminate between the otherwise essentially equally probable values previously reported for the E2/M1 ratio of the 2_2^+ → 2_1^+ transition in 198Pt. We also performed interacting boson approximation (IBA) model-based calculations in the O(6) limit symmetry, with and without inclusion of a small degree of symmetry breaking, and employed the M1 operator in both first and second order to obtain M1 selection rules and to calculate gyromagnetic ratios of levels. When O(6) symmetry is broken, there is a predicted departure from constancy of the g-factors which provides a good test of the nuclear wave function. Evaluative comparisons are made between these experimental and predicted g-factors.
Non-chiral, molecular model of negative Poisson ratio in two dimensions
International Nuclear Information System (INIS)
Wojciechowski, K W
2003-01-01
A two-dimensional model of tri-atomic molecules (in which 'atoms' are distributed on the vertices of equilateral triangles, and which are further referred to as cyclic trimers) is solved exactly in the static (zero-temperature) limit for nearest-neighbour site-site interactions. It is shown that the cyclic trimers form a mechanically stable and elastically isotropic non-chiral phase of negative Poisson ratio. The properties of the system are illustrated by three examples of atom-atom interaction potentials: (i) the purely repulsive (n-inverse-power) potential, (ii) the purely attractive (n-power) potential and (iii) the Lennard-Jones potential, which has both a repulsive and an attractive part. The analytic form of the dependence of the Poisson ratio on the interatomic potential is obtained. It is shown that the Poisson ratio depends, in a universal way, only on the trimer anisotropy parameter, both (1) in the limit n → ∞ for cases (i) and (ii) and (2) at zero external pressure for any potential with a doubly differentiable minimum, of which case (iii) is an example.
Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)
Energy Technology Data Exchange (ETDEWEB)
Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)
2009-05-15
The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic-tree structure had to be considered. (author)
Hofstede, ter F.; Wedel, M.
1998-01-01
This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are
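The aggregation effect is easy to reproduce by simulation along the lines just described. The sketch below draws exponential durations with a constant hazard, groups them into daily, weekly and monthly intervals, and contrasts a proper discrete-time (geometric) estimator with a naive one that treats the interval end as the exact event time; the hazard and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Continuous-time spells with a constant hazard of 0.04 per day.
true_hazard = 0.04
t = rng.exponential(1.0 / true_hazard, size=100_000)

for width, label in ((1.0, "daily"), (7.0, "weekly"), (30.0, "monthly")):
    k = np.ceil(t / width)             # interval in which each spell ends
    p = 1.0 / k.mean()                 # geometric MLE of the interval hazard
    proper = -np.log(1.0 - p) / width  # correct discrete-time likelihood
    naive = 1.0 / (k * width).mean()   # treats interval end as exact time
    print(f"{label:8s} proper={proper:.4f}  naive={naive:.4f} "
          f"(true {true_hazard})")
```

The properly specified discrete-time estimator recovers the continuous hazard at any aggregation level, while the naive estimator drifts further from the truth as the intervals widen.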
Spatiotemporal Patterns in a Ratio-Dependent Food Chain Model with Reaction-Diffusion
Directory of Open Access Journals (Sweden)
Lei Zhang
2014-01-01
Full Text Available Predator-prey models describe biological phenomena of pursuit-evasion interaction. And this interaction exists widely in the world for the necessary energy supplement of species. In this paper, we have investigated a ratio-dependent spatially extended food chain model. Based on the bifurcation analysis (Hopf and Turing), we give the spatial pattern formation via numerical simulation, that is, the evolution process of the system near the coexistence equilibrium point (u2*, v2*, w2*), and find that the model dynamics exhibits complex pattern replication. For fixed parameters, on increasing the control parameter c1, the sequence "holes → holes-stripe mixtures → stripes → spots-stripe mixtures → spots" pattern is observed. And in the case of pure Hopf instability, the model exhibits chaotic wave pattern replication. Furthermore, we consider the pattern formation in the case in which the top predator is extinct, that is, the evolution process of the system near the equilibrium point (u1*, v1*, 0), and find that the model dynamics exhibits stripes-spots pattern replication. Our results show that the reaction-diffusion model is an appropriate tool for investigating the fundamental mechanism of complex spatiotemporal dynamics. It will be useful for studying the dynamic complexity of ecosystems.
DEFF Research Database (Denmark)
Enzenhoefer, R.; Binning, Philip John; Nowak, W.
2015-01-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any......-pathway-receptor concept, mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear, multi-class classifier, PSO is used to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic algorithm support vector machine, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
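A minimal version of this PSO-SVM search can be assembled from scikit-learn and a hand-rolled swarm, as sketched below; the three-ratio feature matrix, fault labels and swarm constants are hypothetical stand-ins for the paper's data and settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Hypothetical training set: three DGA ratios (e.g. CH4/H2, C2H2/C2H4,
# C2H4/C2H6) and an integer fault class per sample.
X = rng.random((200, 3))
y = rng.integers(0, 4, 200)

def fitness(params):
    C, gamma = 10.0 ** params          # search in log10 space
    clf = SVC(C=C, gamma=gamma)        # RBF kernel, multi-class by default
    return cross_val_score(clf, X, y, cv=5).mean()

# Minimal particle swarm over (log10 C, log10 gamma).
n_particles, n_iter = 12, 30
pos = rng.uniform([-1, -3], [3, 1], (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()
print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_f.max())
```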
A modified atmospheric non-hydrostatic model on low aspect ratio grids
Directory of Open Access Journals (Sweden)
Wen-Yih Sun
2012-04-01
Full Text Available It is popular to use a horizontal explicit and vertical implicit (HE-VI) scheme in compressible non-hydrostatic (NH) models. However, when the aspect ratio becomes small, a small time interval is required in HE-VI, because the Courant-Friedrichs-Lewy (CFL) criterion is determined by the horizontal grid spacing. Furthermore, simulations from HE-VI can depart from the forward-backward (FB) scheme in the NH even when the time interval is less than the CFL criterion allows. Hence, a modified non-hydrostatic (MNH) model is proposed, in which the left-hand side of the continuity equation is multiplied by a parameter δ (4 ≤ δ ≤ 16 in this study). When the linearized MNH is solved by FB (other schemes are also possible), the eigenvalue analysis shows that the MNH can suppress the frequency of acoustic waves very effectively but does not have a significant impact on the gravity waves. Hence, the MNH permits a longer time step than that allowed in the original NH. When the aspect ratio is small, the MNH solved by FB can be more accurate and efficient than the NH solved by HE-VI. Therefore, the MNH can be very useful for studying clouds, Large Eddy Simulation (LES), turbulence, flow over complex terrain, etc., which require fine resolution in both the horizontal and vertical directions.
The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio
Roquier, Gerard
2017-06-01
The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures of bidisperse spherical particles. The four parameters are the wall-effect and loosening-effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary-class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), numerically simulated spherical particles (20 values), round natural particles (125 values) and crushed particles (335 values), with correlation coefficients of 99.0%, 98.7%, 97.8% and 96.4%, respectively, and mean deviations of 0.007, 0.006, 0.007 and 0.010, respectively.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and by deriving the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk of radon exposure.
Modelling short term individual exposure from airborne hazardous releases in urban environments
International Nuclear Information System (INIS)
Bartzis, J.G.; Efthimiou, G.C.; Andronopoulos, S.
2015-01-01
Highlights: • The statistical behavior of the variability of individual exposure is described with a beta function. • The extreme value in the beta function is properly addressed by the correlation of [5]. • Two different datasets gave clear support to the proposed novel theory and its hypotheses. - Abstract: A key issue in being able to cope with deliberate or accidental atmospheric releases of hazardous substances is the ability to reliably predict the individual exposure downstream of the source. In many situations, the release time and/or the health-relevant exposure time is short compared to mean concentration time scales. In such cases, a significant scatter of exposure levels is expected due to the stochastic nature of turbulence. The problem becomes even more complex when dispersion occurs over urban environments. The present work is a first attempt to approximate, in generic terms, the statistical behavior of the above-mentioned variability with a beta-distribution probability density function (beta-pdf), which has proved quite successful. The important issue of the extreme concentration value in the beta-pdf seems to be properly addressed by the correlation of [5], in which global values of its associated constants are proposed. Two substantially different datasets, the wind-tunnel Michelstadt experiment and the field Mock Urban Setting Trial (MUST) experiment, gave clear support to the proposed novel theory and its hypotheses. In addition, the present work can be considered a basis for further investigation and model refinements.
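Fitting the beta-pdf described here takes only a few lines once concentrations are normalized by an extreme-value estimate. The sketch below uses synthetic lognormal dosage samples and a crude stand-in for the extreme concentration; in the approach above, that extreme value would instead come from the correlation of [5].

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical surrogate for short-duration dosage samples at one
# receptor, normalized by an assumed extreme value c_max so that
# x = c/c_max lies in (0, 1), as a beta pdf requires.
c = rng.lognormal(mean=0.0, sigma=0.8, size=2000)
c_max = 1.05 * c.max()        # crude stand-in for the extreme-value estimate
x = c / c_max

a, b, loc, scale = stats.beta.fit(x, floc=0, fscale=1)
print(f"fitted beta parameters: a={a:.2f}, b={b:.2f}")
# Probability that a short exposure exceeds half the extreme value:
print("P(c > 0.5*c_max) =", stats.beta.sf(0.5, a, b))
```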
Industry-specific risk models for numerical scoring of hazards and prioritization of safety measures
International Nuclear Information System (INIS)
Khali, Y.F.; Johnson, K.
2004-01-01
Risk analysis consists of five cornerstones that have to be viewed in a holistic manner by the risk practitioners of any organization, regardless of the industry type or the nature of its critical infrastructures. The cornerstones are hazard identification; risk assessment and consequence analysis; determination of the risk management actions required to reduce risks to acceptable levels; communication of risk insights among the stakeholders; and continuous monitoring and verification to ensure sustained attainment of tolerable risk levels. Our primary objectives in this research are twofold: first, we compare and contrast a wide spectrum of current industry-specific and application-dependent semi-quantitative risk models. Secondly, based on the insights gained from the first task, we propose a framework for a robust risk-based approach for conducting security vulnerability assessment (SVA). Risk practitioners of critical infrastructures, such as commercial nuclear power plants, water utilities, chemical plants, transmission and distribution substations, etc., could readily use this proposed approach to classify, evaluate, and prioritize risks to support the allocation of resources required to ensure protection of public health and safety. (author)
Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling
Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.
2014-01-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65) was significantly associated with a reduced hazard of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems, as well as utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779
Hazard function theory for nonstationary natural hazards
Read, L.; Vogel, R. M.
2015-12-01
Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
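The reliability and return-period computations follow directly from the Poisson-GP assumptions. In the sketch below, the annual exceedance probability of a design threshold grows through a hypothetical linear trend in the GP scale parameter, and the average return period is the expected time to first exceedance; all numbers (shape, scale, trend, event rate, threshold) are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def exceedance_prob(year, threshold, shape=0.1, scale0=1.0, trend=0.01,
                    rate=2.0):
    """Annual probability that at least one PDS event exceeds `threshold`
    when magnitudes are GP with a scale growing linearly in time and
    arrivals are Poisson with `rate` events/yr (hypothetical numbers)."""
    scale = scale0 * (1.0 + trend * year)
    p_event = genpareto.sf(threshold, shape, scale=scale)
    return 1.0 - np.exp(-rate * p_event)   # Poisson-GP exceedance

years = np.arange(1, 201)
p = np.array([exceedance_prob(t, threshold=5.0) for t in years])
survival = np.cumprod(1.0 - p)             # reliability through year t
avg_return = 1.0 + survival.sum()          # E[first exceedance year],
                                           # truncated at 200 yr
print(f"p(year 1)={p[0]:.4f}, p(year 100)={p[99]:.4f}, "
      f"average return period = {avg_return:.1f} yr")
```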
Anderson, E. R.; Griffin, R.; Irwin, D.
2013-12-01
Heavy rains and steep, volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not for the shallow landslides that occur nearly every year, despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit-filling techniques on the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provides an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data, but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provide the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved with the ArcInfo Spatial Analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of the original elevation values.
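Sink filling of the kind discussed here need not rely on a particular GIS package; the priority-flood algorithm is a common generic formulation. The sketch below fills a toy DEM by growing inward from the grid edge, always expanding from the lowest frontier cell and raising any lower neighbour to that level; it illustrates the idea only and omits the flat-resolution and drainage-conditioning options a production tool would offer.

```python
import heapq
import numpy as np

def priority_flood_fill(dem):
    """Fill DEM pits with the priority-flood algorithm (a generic sketch,
    not the tool chain used in the study)."""
    ny, nx = dem.shape
    filled = dem.astype(float).copy()
    seen = np.zeros_like(dem, dtype=bool)
    heap = []
    for i in range(ny):                    # seed with all boundary cells
        for j in range(nx):
            if i in (0, ny - 1) or j in (0, nx - 1):
                heapq.heappush(heap, (filled[i, j], i, j))
                seen[i, j] = True
    while heap:
        z, i, j = heapq.heappop(heap)      # lowest cell on the frontier
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not seen[ni, nj]:
                filled[ni, nj] = max(filled[ni, nj], z)  # raise pit cells
                seen[ni, nj] = True
                heapq.heappush(heap, (filled[ni, nj], ni, nj))
    return filled

dem = np.array([[5, 3, 5, 5],
                [5, 1, 4, 5],
                [5, 4, 4, 5],
                [5, 5, 5, 5]], dtype=float)
print(priority_flood_fill(dem))   # the 1 m pit fills to the 3 m outlet level
```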
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. Results for a real data application are also shown.
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.
2009-04-01
In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified in two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinion and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and are commonly used because of the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and newer heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movement, and characteristic factors used. We analysed and compared approaches for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country in Central America where earthquake-triggered landslides are common. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
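The logistic-regression branch of such an analysis is compact to prototype. The sketch below trains a susceptibility model on hypothetical per-cell terrain predictors and scores it with the area under the ROC curve, mirroring the AUC-style validation used in studies of this kind; predictors, coefficients and data are synthetic stand-ins, not the El Salvador inventory.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)

# Hypothetical per-cell predictors: slope (deg), elevation (m), annual
# precipitation (mm), roughness; target = landslide occurrence (0/1).
n = 5000
X = np.column_stack([rng.uniform(0, 50, n), rng.uniform(200, 2000, n),
                     rng.uniform(800, 2400, n), rng.uniform(0, 1, n)])
logit = -6.0 + 0.12 * X[:, 0] + 0.002 * X[:, 2] - 0.001 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]        # susceptibility score per cell
print("AUC:", roc_auc_score(y_te, p))    # held-out validation
```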
WCSPH with Limiting Viscosity for Modeling Landslide Hazard at the Slopes of Artificial Reservoir
Directory of Open Access Journals (Sweden)
Sauro Manenti
2018-04-01
Full Text Available This work illustrates an application of the FOSS code SPHERA v.8.0 (RSE SpA, Milano, Italy) to the simulation of landslide hazard at the slope of a water basin. SPHERA is based on the weakly compressible SPH method (WCSPH) and includes a mixture model, consistent with the packing limit of the Kinetic Theory of Granular Flow (KTGF), which was previously tested for simulating two-phase free-surface rapid flows involving water-sediment interaction. In this study a limiting viscosity parameter was implemented in the previous formulation of the mixture model to limit the growth of the apparent viscosity, thus saving computational time while preserving the solution accuracy. This approach is consistent with the experimental behavior of high-polymer solutions, for which an almost constant value of viscosity may be approached at very low deformation rates near the transition zone of the elastic-plastic regime. In this application, the limiting viscosity was used as a numerical parameter for optimization of the computation. Some preliminary tests were performed by simulating a 2D erosional dam break, proving that a proper selection of the limiting viscosity leads to a considerable drop in computational time without significantly altering the numerical solution. SPHERA was then validated by simulating a 2D scale experiment reproducing the early phase of the Vajont landslide, when a tsunami wave was generated that climbed the opposite mountain side with a maximum run-up of about 270 m. The obtained maximum run-up was very close to the experimental result. The influence of saturation of the landslide material below the still-water level was also accounted for, showing that the landslide dynamics can be better represented and the wave run-up properly estimated.
A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard
Alaeddine, H.; Serrhini, K.; Maizia, M.
2015-03-01
Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to the traffic network, accessibility, human resources and material equipment (vehicles, collecting points, etc.). The main objective of this work is to provide assistance to technical services and rescue forces in terms of accessibility by offering itineraries for the rescue and evacuation of people and property. We consider in this paper the evacuation of a medium-sized urban area exposed to flood hazard. In case of inundation, most people will be evacuated using their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites on which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which benefits from a low rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., to compute for each individual the departure date and the path to reach the assembly point (also called a shelter) according to a priority list established for this purpose. The evacuation plan must avoid congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of populations exposed to natural disasters, and more specifically to flood risk.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) and examine their efficiency in describing the lactation curves for the milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day FPR records from the first three lactations of Iranian buffaloes, collected from 523 dairy herds between 1996 and 2012 by the Animal Breeding Center of Iran. Each model was fitted to the monthly FPR records using the non-linear mixed model procedure (PROC NLMIXED) in SAS, and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively, while the Wood, Dhanoa and Sikka mixed models provided the best fit in third-parity buffaloes. Evaluation of the first, second and third lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, the minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
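For illustration, Wood's curve (one of the seven models compared) can be fitted to monthly FPR records along these lines; the data below are invented, and the AIC shown is a least-squares analogue rather than the NLMIXED likelihood version:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    # Wood's incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c * t)
    return a * t**b * np.exp(-c * t)

t = np.arange(1.0, 11.0)  # monthly test days (hypothetical)
y = np.array([1.40, 1.20, 1.10, 1.05, 1.00, 1.00, 1.02, 1.05, 1.10, 1.15])  # FPR

params, _ = curve_fit(wood, t, y, p0=(1.5, -0.1, -0.01), maxfev=10000)
resid = y - wood(t, *params)
n, k = len(y), 3
aic = n * np.log(np.sum(resid**2) / n) + 2 * k  # least-squares AIC analogue
```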
Yu, Hua-Gen
2008-05-21
A spherical electron cloud hopping (SECH) model is proposed to study the product branching ratios of dissociative recombination (DR) of polyatomic systems. In this model, the fast electron-capture process is treated as an instantaneous hopping of a cloud of uniform spherical fractional point charges onto a target M^q+ ion (or molecule). The sum of the point charges (-1) simulates the incident electron. The sphere radius is determined by a critical distance (R_c^eM) between the incoming electron (e-) and the target, at which the potential energy of the e(-)-M^q+ system is equal to that of the electron-captured molecule M^(q-1)+ in a symmetry-allowed electronic state with the same structure as M^q+. During the hopping procedure, the excess energies of the electron association reaction are dispersed into the kinetic energies of the M^(q-1)+ atoms to conserve total energy. The kinetic energies are adjusted by linearly adding atomic momenta in the direction of the driving forces induced by the scattering electron. The nuclear dynamics of the resultant M^(q-1)+ molecule are studied by using a direct ab initio dynamics method on the adiabatic potential energy surface of M^(q-1)+, or together with extra adiabatic surface(s) of M^(q-1)+. For the latter case, the "fewest switches" surface hopping algorithm of Tully was adapted to deal with the nonadiabaticity in trajectory propagations. The SECH model has been applied to study the DR of both CH+ and H3O+(H2O)2. The theoretical results are consistent with experiment. It was found that water molecules play an important role in determining the product branching ratios of the molecular cluster ion.
International Nuclear Information System (INIS)
Pinder, John E.; Rowan, David J.; Smith, Jim T.
2016-01-01
Data from published studies and World Wide Web sources were combined to develop a regression model to predict 137Cs concentration ratios for saltwater fish. Predictions were developed from 1) numeric trophic levels computed primarily from random resampling of known food items and 2) K concentrations in the saltwater for 65 samplings from 41 different species from both the Atlantic and Pacific Oceans. A number of different models were initially developed and evaluated for accuracy, which was assessed as the ratios of independently measured concentration ratios to those predicted by the model. In contrast to freshwater systems, where K concentrations are highly variable and are an important factor affecting fish concentration ratios, the less variable K concentrations in saltwater were relatively unimportant in affecting concentration ratios. As a result, the simplest model, which used only trophic level as a predictor, had accuracy comparable to more complex models that also included K concentrations. A test of model accuracy, comparing 56 published concentration ratios from 51 species of marine fish to those predicted by the model, indicated that 52 of the predicted concentration ratios were within a factor of 2 of the observed concentration ratios. - Highlights: • We developed a model to predict concentration ratios (C_r) for saltwater fish. • The model requires only a single input variable to predict C_r. • That variable is a mean numeric trophic level available at fishbase.org. • The K concentrations in seawater were not an important predictor variable. • The median observed-to-predicted ratio for 56 independently measured C_r was 0.83.
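A minimal sketch of the single-predictor idea, regressing log concentration ratio on trophic level; the numbers are invented and the paper's fitted coefficients are not reproduced:

```python
import numpy as np

# Hypothetical illustration: regress log10(CR) on numeric trophic level (TL).
tl = np.array([2.8, 3.1, 3.5, 3.9, 4.2, 4.5])    # trophic levels
cr = np.array([20., 35., 55., 90., 140., 210.])  # 137Cs concentration ratios

slope, intercept = np.polyfit(tl, np.log10(cr), 1)

def predict_cr(trophic_level):
    return 10 ** (intercept + slope * trophic_level)

# Accuracy check in the paper's style: observed/predicted within a factor of 2
obs_over_pred = cr / predict_cr(tl)
within_factor_2 = np.mean((obs_over_pred > 0.5) & (obs_over_pred < 2.0))
```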
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on the Nevado de Toluca volcano (Mexico) using the TITAN2D simulation software, and its application to create hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central Mexico near the cities of Toluca and Mexico City; its past activity has endangered an area that today holds more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has had long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations
The SCEC Community Modeling Environment (SCEC/CME): A Collaboratory for Seismic Hazard Analysis
Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.
2005-12-01
The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure system for maintaining the association between the data sets and their metadata. To provide an easy
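As an illustration of the PSHA computation that systems like OpenSHA automate, here is a generic hazard-curve sketch under a lognormal attenuation model; this is the textbook formulation, not OpenSHA's actual API:

```python
import numpy as np
from scipy.stats import norm

def hazard_curve(im_levels, sources):
    """Annual exceedance rates for a set of ground-motion levels.

    sources: list of (annual_rate, median_im, sigma_ln) per rupture scenario,
    assuming a lognormal ground-motion (attenuation) model.
    """
    rates = np.zeros_like(im_levels, dtype=float)
    for rate, median, sigma in sources:
        p_exceed = 1.0 - norm.cdf(np.log(im_levels), np.log(median), sigma)
        rates += rate * p_exceed
    # Poisson assumption: P(exceedance in 1 yr) = 1 - exp(-rates)
    return rates
```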
Directory of Open Access Journals (Sweden)
P. Horton
2013-04-01
Full Text Available The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, thus producing more realistic extents. The choices of datasets and algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time
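The Holmgren-type spreading idea can be sketched as follows; this is the original Holmgren (1994) weighting with hypothetical arguments, and it does not include the Flow-R improvements described above:

```python
import numpy as np

def holmgren_proportions(z_center, z_neighbors, distances, x=4.0):
    """Flow proportions to the eight neighbours after Holmgren (1994):
    proportional to (tan beta)^x over the downslope cells. x = 1 gives highly
    divergent flow; large x converges toward single-direction (D8) flow.
    """
    tan_beta = (z_center - np.asarray(z_neighbors)) / np.asarray(distances)
    tan_beta = np.where(tan_beta > 0.0, tan_beta, 0.0)  # downslope cells only
    weights = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights
```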
Directory of Open Access Journals (Sweden)
Islam Abou El-Magd
2010-06-01
Full Text Available In the mountainous area of the Red Sea region in southeastern Egypt, the development of new mining activities and domestic infrastructure requires reliable and accurate information about natural hazards, particularly flash floods. This paper presents an assessment of flash flood hazards in the Abu Dabbab drainage basin. Remotely sensed data were used to delineate the alluvial active channels, which were integrated with morphometric parameters extracted from digital elevation models (DEM) into geographic information systems (GIS) to construct a hydrological model that provides estimates of the amount of surface runoff as well as the magnitude of flash floods. The peak discharge varies considerably among cross-sections along the main channel. Under a uniform 10 mm rainfall event, the selected cross-section in the middle of the main channel is prone to a maximum water depth of 80 cm, which decreases to nearly 30 cm at the outlet due to transmission loss. The estimated spatial variability of flow parameters within the catchment, at the different confluences of the constituent sub-catchments, can be used in planning engineering foundations and linear infrastructure with the least flash flood hazard. Such information would, indeed, help decision makers and planners to minimize such hazards.
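The abstract does not name the runoff method used, so as a stand-in illustration here is the standard SCS curve-number runoff depth, a common choice for event-based flash flood estimation in ungauged arid basins:

```python
def scs_runoff_depth(p_mm, cn):
    """SCS curve-number runoff depth (mm) for a rainfall event of p_mm.

    A generic textbook method chosen for illustration only; the paper's own
    hydrological model is not specified in this abstract.
    """
    s = 25400.0 / cn - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s              # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a 10 mm event on rocky desert terrain (hypothetical CN value)
depth = scs_runoff_depth(10.0, cn=90.0)
```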
Beta-binomial model for meta-analysis of odds ratios.
Bakbergenuly, Ilyas; Kulinskaya, Elena
2017-05-20
In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC, in terms of bias and coverage, is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study the performance of the various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. The developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
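A small simulation sketch of the setting, with beta-binomial arms combined through a continuity-corrected Mantel-Haenszel OR; the parameter values are arbitrary and the paper's ICC estimators are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_binomial(n, p_mean, icc):
    # The event probability is itself Beta-distributed; for the
    # beta-binomial, icc = 1 / (alpha + beta + 1).
    s = 1.0 / icc - 1.0
    p = rng.beta(p_mean * s, (1.0 - p_mean) * s)
    return rng.binomial(n, p)

num = den = 0.0
for _ in range(20):                           # 20 simulated studies
    n1 = n0 = 50
    a = beta_binomial(n1, 0.35, 0.05) + 0.5   # treatment events (+0.5 correction)
    c = beta_binomial(n0, 0.25, 0.05) + 0.5   # control events
    b, d = n1 - a + 1.0, n0 - c + 1.0         # non-events, same correction
    nt = a + b + c + d
    num += a * d / nt
    den += b * c / nt

or_mh = num / den                             # Mantel-Haenszel pooled OR
```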
DEFF Research Database (Denmark)
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function, obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates...
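The censoring-model weighting can be sketched with the lifelines library as follows; this is a minimal inverse-probability-of-censoring-weighting illustration on synthetic data, not the authors' estimator:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"age": rng.normal(60, 10, n),
                   "time": rng.exponential(5, n).round(2),
                   "event": rng.integers(0, 2, n)})

# Fit a Cox model to the *censoring* distribution (flip the event indicator),
# then weight subject i by 1 / S_c(t_i | x_i): the predicted probability of
# remaining uncensored at the subject's own observed time.
cens = df.assign(cens=1 - df["event"])
cph = CoxPHFitter().fit(cens[["age", "time", "cens"]],
                        duration_col="time", event_col="cens")
surv = cph.predict_survival_function(df[["age"]])  # columns follow df's index
weights = np.array([1.0 / max(np.interp(t, surv.index, surv[i]), 1e-8)
                    for i, t in zip(surv.columns, df["time"])])
```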
Mesomechanical model and analysis of an artificial muscle functioning: role of Poisson’s ratio
Shil'ko, Serge; Chernous, Dmitry; Basinyuk, Vladimir
2016-05-01
The mechanism of force generation in a polymer monofilament actuator element with auxetic characteristics is modeled to support the development and optimization of a controlled drive based on electrostrictive polymers. The monofilament is considered as a viscoelastic rod. By assuming a 'sliding thread' deformation within the system, the variation of the monofilament length during uniform contraction and the force generated during a uniaxial mode of actuation are obtained. The distribution of the axial stress along the length of the monofilament was determined at various stages of the uniform contraction. The contraction rate reaches a maximum, and the stress intensity a minimum, when the equivalent Poisson's ratio of the actuator is negative.
Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego
2014-05-01
The authors have been involved in the use of Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses were the main factors, but where human resources and outcomes were also important and natural hazards relevant. In one line of work they contributed to the qualification of lands of the Community of Madrid (CM), an administrative area in the centre of Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in the centre, and also the Madrid metropolis, starting from an official UPM study for the CM that qualified lands using a FAO model requiring minimums across a whole set of Soil Science criteria. From these criteria the authors first derived a complementary additive qualification, and later attempted an intermediate qualification combining both using fuzzy logic. The authors were also involved, together with colleagues from Argentina and elsewhere who are in contact with local planners, in the delimitation of regions and the selection of management entities for them. At these general levels they adopted multi-criteria MDM: a weighted PROMETHEE, an ELECTRE-I with the same elicited weights for the criteria and data, and, in parallel, AHP using Expert Choice with pairwise comparisons among similar criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates have natural hazards that are decisive for their selection and qualification, captured in an initial matrix used for ELECTRE and PROMETHEE. For the natural area of Arroyos Menores, south of the town of Rio Cuarto, with the La Colacha subarea to the north, the loess lands are rich but now suffer from water erosion forming regressive gullies that are spoiling them, and land-use alternatives must consider Soil Conservation and Hydraulic Management actions. The soils may be used in diverse, non-compatible ways, such as autochthonous forest, high-value forest, traditional
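A minimal weighted-PROMETHEE sketch using the 'usual' (strict preference) criterion; the criteria, weights and preference functions of the actual studies are not reproduced here:

```python
import numpy as np

def promethee_net_flows(scores, weights):
    """Net outranking flows for a weighted PROMETHEE ranking.

    scores: (n_alternatives, n_criteria) matrix, higher is better on every
    criterion. P(i, j) is the weighted share of criteria on which i strictly
    beats j; alternatives are ranked by descending net flow.
    """
    n, _ = scores.shape
    pref = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pref[i, j] = np.sum(weights * (scores[i] > scores[j]))
    phi_plus = pref.sum(axis=1) / (n - 1)   # how strongly i outranks others
    phi_minus = pref.sum(axis=0) / (n - 1)  # how strongly i is outranked
    return phi_plus - phi_minus

# three hypothetical land-use alternatives scored on four criteria
scores = np.array([[0.7, 0.2, 0.9, 0.4],
                   [0.5, 0.8, 0.3, 0.6],
                   [0.9, 0.4, 0.5, 0.5]])
flows = promethee_net_flows(scores, weights=np.array([0.4, 0.3, 0.2, 0.1]))
```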
Assessment of erosion hazard after recurrent fires with the RUSLE 3D model
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to determine whether there is more soil erosion after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha in the northwest of the Iberian Peninsula was studied because it has a high frequency of fires. The erosion hazard was assessed at several points in time using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had burnt in the past 15 years. Because a detailed study of such a large area is complex and the necessary information is not available annually, the most relevant moments had to be selected. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation 3D) was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible, using the GIS "gvSIG" (http://www.gvsig.com/es), with metadata taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics: some authors suggest that it only serves to carry out comparisons between areas, not to calculate absolute soil loss, arguing that in field measurements the actual eroded soil recovered can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
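The RUSLE factor product itself is a cell-by-cell raster multiplication; a sketch with hypothetical factor grids (RUSLE3D additionally derives LS from the upslope contributing area, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# RUSLE soil loss A = R * K * LS * C * P, evaluated cell by cell on rasters.
# The grids below are hypothetical stand-ins for the GIS layers.
R  = np.full((4, 4), 900.0)          # rainfall erosivity
K  = np.full((4, 4), 0.03)           # soil erodibility
LS = rng.uniform(0.5, 8.0, (4, 4))   # slope length-steepness factor
C  = np.where(LS > 4.0, 0.45, 0.05)  # cover factor, higher where burnt
P  = np.ones((4, 4))                 # support practice factor (none)

A = R * K * LS * C * P               # soil loss per cell (t/ha/yr)
```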
Large signal-to-noise ratio quantification in MLE for ARARMAX models
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by an indirect method can generally be transformed into open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also shows how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, graphical validation of the threshold on a two-dimensional plot is discussed.
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
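The principle of the proposed correction (rescale T_ML so that its mean matches the nominal degrees of freedom) can be sketched as follows; the paper's empirically fitted formulation is not reproduced:

```python
import numpy as np

def empirical_bartlett_factor(t_ml_sim, df_model):
    """Return a scaling factor for T_ML such that the corrected statistic has
    mean equal to the nominal degrees of freedom, in the spirit of a Bartlett
    correction. t_ml_sim holds T_ML values simulated under the correct model.
    """
    return df_model / np.mean(t_ml_sim)

# usage sketch: t_corrected = empirical_bartlett_factor(sim_stats, df_model) * t_obs
```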
Stanley, Dal; Villaseñor, Antonio; Benz, Harley
1999-01-01
The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This
Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele
2016-04-01
Resilience has recently become a key concept and a crucial paradigm in the analysis of the impacts of natural disasters, particularly concerning Lifeline Systems (LS). Traditional risk management approaches require precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on past events and trend analysis. Nevertheless, due to the inherent complexity of LS, their interconnectedness and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete comprehension of the processes influencing vulnerabilities and threats. Resilience thinking therefore addresses the complexities of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system; it focuses on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS has led governmental agencies and institutions to develop resilience management strategies. Risk-prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A well-known method for assessing LS resilience is the TOSE approach. The most interesting feature of this approach is the integration of four dimensions: Technical, Organizational, Social and Economic. All of these dimensions contribute to the resilience level of an infrastructural system and should therefore be quantitatively assessed. Several studies have underlined that the lack of integration among the different dimensions composing the resilience concept may contribute to a mismanagement of LS in case of natural disasters
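One common way to quantify resilience in this tradition is the Bruneau-style resilience-loss integral over a functionality curve; a sketch with hypothetical numbers (the TOSE assessment itself is richer than this single curve):

```python
import numpy as np

def resilience_loss(t, q):
    """Area between full functionality (100%) and the functionality curve
    q(t) over the recovery window; smaller loss means a more resilient
    system. Each TOSE dimension could contribute its own q(t)."""
    return np.trapz(100.0 - np.asarray(q, dtype=float), np.asarray(t, dtype=float))

t = [0, 1, 2, 5, 10, 20]          # days after the disruption (hypothetical)
q = [100, 40, 55, 70, 90, 100]    # % functionality of the water supply
loss = resilience_loss(t, q)
```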
Relative Hazard Calculation Methodology
International Nuclear Information System (INIS)
DL Strenge; MK White; RD Stenner; WB Andrews
1999-01-01
The methodology presented in this document was developed to provide a means of calculating the RH ratios used in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated hazard values could then be examined to help understand which risk management activities address the higher-hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation).
Celis, C.; Sepulveda, S. A.; Castruccio, A.; Lara, M.
2017-12-01
Debris and mudflows are some of the main geological hazards in the mountain foothills of Central Chile. The risk of flows triggered in the basins of ravines that drain the Andean frontal range into the capital city, Santiago, increases with time due to accelerated urban expansion. Susceptibility assessments by several authors have identified the main active ravines in the area: the Macul and San Ramon ravines have high to medium debris flow susceptibility, whereas the Lo Cañas, Apoquindo and Las Vizcachas ravines have medium to low debris flow susceptibility. This study focuses on delimiting the potentially hazardous zones using the numerical simulation program RAMMS-Debris Flows, with the Voellmy model approach, and the debris-flow model LAHARZ. For the RAMMS approach, this is carried out by back-calculating the frictional parameters in the depositional zone using a known event, the debris and mudflows of May 3rd, 1993 in the Macul and San Ramon ravines. For the same scenario, we calibrate the LAHARZ coefficients to match the conditions of the mountain foothills of Santiago. We use the information obtained for every main ravine in the study area, given the similarity in slopes and transported material. Simulations were made for the worst-case scenario, caused by the combination of intense rainfall storms, a high 0°C isotherm level and material availability in the basins where the flows are triggered. The results show that the runout distances are well simulated, so a debris-flow hazard map could be developed with these models. Correlation issues concerning the run-up, deposit thickness and transversal areas are reported. Hence, the models do not entirely represent the complexity of the phenomenon, but they are a reliable approximation for preliminary hazard maps.
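The Voellmy rheology used in RAMMS combines Coulomb friction with a turbulent drag term; a sketch of the flow resistance with generic placeholder parameters, not the back-calculated values of the study:

```python
import numpy as np

def voellmy_resistance(h, u, slope_deg, rho=1900.0, mu=0.1, xi=200.0, g=9.81):
    """Voellmy flow resistance per unit area (Pa): Coulomb friction on the
    bed-normal stress plus a velocity-squared turbulent term. mu and xi are
    the parameters back-calculated in RAMMS-type modelling; the values here
    are generic placeholders.
    h: flow depth (m); u: flow speed (m/s); rho: bulk density (kg/m^3).
    """
    normal_stress = rho * g * h * np.cos(np.radians(slope_deg))
    return mu * normal_stress + rho * g * u**2 / xi
```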
Hydrology Analysis and Modelling for Klang River Basin Flood Hazard Map
Sidek, L. M.; Rostam, N. E.; Hidayah, B.; Roseli, ZA; Majid, W. H. A. W. A.; Zahari, N. Z.; Salleh, S. H. M.; Ahmad, R. D. R.; Ahmad, M. N.
2016-03-01
Flooding, a common environmental hazard worldwide, has increased in recent times as a result of climate change and urbanization, with the effects felt more in developing countries. The exposure of Tenaga Nasional Berhad (TNB) substations to flooding has also increased rapidly, because existing substations are located in flood-prone areas. Understanding the impact of floods on its substations, TNB has adopted non-structural mitigation by integrating the Flood Hazard Map with its substation planning. Hydrological analysis is the key part of this work, providing the runoff that serves as input for the hydraulic modelling.
Kaon-pion ratio from ISR results and the derived sea level muon spectrum from Maeda's model
Bhattacharya, D P
1978-01-01
The sea-level muon spectrum has been calculated using Maeda's (1973) model. The contribution of kaon decay to the muon flux has been included in the calculation through the kaon-to-pion ratio, taking the value determined by the CERN Intersecting Storage Ring group (Antinucci et al., 1973). (7 refs).
Wind vs Water in Hurricanes: The Challenge of Multi-peril Hazard Modeling
Powell, M. D.
2017-12-01
operational solution to collect wind and water level measurements, and to conduct observation based modeling of wind and water impacts. My presentation will discuss some of the challenges to wind and water hazard monitoring and modeling.
Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.
Brase, Gary L; Walker, Gary
2004-06-01
Female waist-to-hip ratio (WHR) has generally been an important predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters, such as the pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and the likelihood that they would approach them. Sociosexually restricted males were less likely than unrestricted males to approach the females rated as most attractive (with 0.68-0.72 WHR). Males with lower scores on physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.
Mitavskiy, Boris; Cannings, Chris
2009-01-01
The stochastic process underlying an evolutionary algorithm is well known to be Markovian, and such processes have been investigated in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and therefore has a unique stationary distribution. Rather little is known about this stationary distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those consisting of repeated copies of the same individual). At the same time, knowing the stationary distribution may provide information about the expected time it takes for the algorithm to reach a certain solution, allow assessment of the biases due to recombination and selection, and is of importance in population genetics for assessing what is called the "genetic load" (see the introduction for more details). In recent joint work of the first author, bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in those papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all three main stages of an evolutionary algorithm: selection, recombination, and mutation.
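For a finite chain, the stationary distribution and ratios of its values on subsets of states can be computed directly; a toy sketch standing in for the much larger EA state spaces analyzed in the paper:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible Markov chain with transition
    matrix P (rows sum to 1), via the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# toy 3-state chain standing in for a population-level EA model
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
pi = stationary_distribution(P)
ratios = pi / pi.max()   # ratios of stationary mass between state subsets
```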
International Nuclear Information System (INIS)
Pinder, John E.; Rowan, David J.; Rasmussen, Joseph B.; Smith, Jim T.; Hinton, Thomas G.; Whicker, F.W.
2014-01-01
Data from published studies and World Wide Web sources were combined to produce and test a regression model to predict Cs concentration ratios for freshwater fish species. The accuracies of predicted concentration ratios, which were computed using 1) species trophic levels obtained from random resampling of known food items and 2) K concentrations in the water for 207 fish from 44 species and 43 locations, were tested against independent observations of ratios for 57 fish from 17 species from 25 locations. Accuracy was assessed as the percent of observed to predicted ratios within factors of 2 or 3. Conservatism, expressed as the lack of under-prediction, was assessed as the percent of observed to predicted ratios that were less than 2 or less than 3. The model's median observed to predicted ratio was 1.26, which was not significantly different from 1, and 50% of the ratios were between 0.73 and 1.85. The percentages of ratios within factors of 2 or 3 were 67 and 82%, respectively. The percentages of ratios that were <2 or <3 were 79 and 88%, respectively. An example for Perca fluviatilis demonstrated that increased prediction accuracy could be obtained when more detailed knowledge of diet was available to estimate trophic level. - Highlights: • We developed a model to predict Cs concentration ratios for freshwater fish species. • The model uses only two variables to predict a species CR for any location. • One variable is the K concentration in the freshwater. • The other is a species mean trophic level measure easily obtained from fishbase.org. • The median observed to predicted ratio for 57 independent test cases was 1.26.
Wang, Wenjiao B.; Abelson, John R.
2014-11-01
Complete filling of a deep recessed structure with a second material is a challenge in many areas of nanotechnology fabrication. A newly discovered superconformal coating method, applicable in chemical vapor deposition systems that utilize a precursor in combination with a co-reactant, can solve this problem. However, filling is a dynamic process in which the trench progressively narrows and the aspect ratio (AR) increases. This reduces species diffusion within the trench and may drive the component partial pressures out of the regime for superconformal coating. We therefore derive two theoretical models that can predict the possibility of filling. First, we recast the diffusion-reaction equation for the case of a sidewall with variable taper angle. This affords a definition of effective AR, which is larger than the nominal AR due to the reduced species transport. We then derive the coating profile, both for superconformal and for conformal coating. The critical (most difficult) step in the filling process occurs when the sidewalls merge at the bottom of the trench to form the V shape. For the Mg(DMADB)2/H2O system and a starting AR = 9, this model predicts that complete filling should not be possible, whereas experimentally we do obtain complete filling. We then hypothesize that glancing-angle, long-range transport of species may be responsible for the better than predicted filling. To account for the variable range of species transport, we construct a ballistic transport model. This incorporates the incident flux from outside the structure, cosine-law re-emission from surfaces, and line-of-sight transport between internal surfaces. We cast the transport probability between all positions within the trench into a matrix that represents the redistribution of flux after one cycle of collisions. Matrix manipulation then affords a computationally efficient means to determine the steady-state flux distribution and growth rate for a given taper angle. The
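The matrix manipulation referred to above amounts to summing a geometric series of collision cycles; a sketch, assuming sticking removes enough flux each cycle that the series converges (the actual discretization and view factors are not reproduced):

```python
import numpy as np

def steady_state_flux(M, f0):
    """Steady-state flux on internal surface elements.

    M[i, j]: probability that flux re-emitted from element j lands on element
    i after one collision cycle; f0: direct flux from outside the structure.
    The series f0 + M f0 + M^2 f0 + ... sums to (I - M)^-1 f0, valid when the
    spectral radius of M is below 1 (some flux sticks every cycle).
    """
    n = M.shape[0]
    return np.linalg.solve(np.eye(n) - M, f0)
```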
A study of the slope of Cox proportional hazard and Weibull models
African Journals Online (AJOL)
Adejumo & Ahmadu
known and the hazard function is completely specified except for the values of the ... through the air when people who have an active TB infection cough or sneeze ... The increase of TB incidence is highest in Africa and Asia, areas with the highest ... further complicating treatment by increasing the length and cost of therapy.
Spatial Modelling of Urban Physical Vulnerability to Explosion Hazards Using GIS and Fuzzy MCDA
Directory of Open Access Journals (Sweden)
Yasser Ebrahimian Ghajari
2017-07-01
Full Text Available Most of the world’s population is concentrated in accumulated spaces in the form of cities, making the concept of urban planning a significant issue for consideration by decision makers. Urban vulnerability is a major issue which arises in urban management, and is simply defined as how vulnerable various structures in a city are to different hazards. Reducing urban vulnerability and enhancing resilience are considered to be essential steps towards achieving urban sustainability. To date, a vast body of literature has focused on investigating urban systems’ vulnerabilities with regard to natural hazards. However, less attention has been paid to vulnerabilities resulting from man-made hazards. This study proposes to investigate the physical vulnerability of buildings in District 6 of Tehran, Iran, with respect to intentional explosion hazards. A total of 14 vulnerability criteria are identified according to the opinions of various experts, and standard maps for each of these criteria have been generated in a GIS environment. Ultimately, an ordered weighted averaging (OWA technique was applied to generate vulnerability maps for different risk conditions. The results of the present study indicate that only about 25 percent of buildings in the study area have a low level of vulnerability under moderate risk conditions. Sensitivity analysis further illustrates the robustness of the results obtained. Finally, the paper concludes by arguing that local authorities must focus more on risk-reduction techniques in order to reduce physical vulnerability and achieve urban sustainability.
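An OWA aggregation sketch for one building's 14 standardized criterion scores; the values and order weights are hypothetical, and the paper's calibrated weights are not reproduced:

```python
import numpy as np

def owa(scores, order_weights):
    """Ordered weighted averaging: weights attach to rank positions rather
    than to criteria, so varying them moves the operator between AND-like
    (min) and OR-like (max) risk attitudes."""
    return np.sort(scores)[::-1] @ np.asarray(order_weights)

# 14 standardized vulnerability criteria for one building (hypothetical)
scores = np.random.default_rng(0).uniform(0.0, 1.0, 14)
low_risk  = owa(scores, np.r_[np.zeros(13), 1.0])  # ~min: most demanding
high_risk = owa(scores, np.r_[1.0, np.zeros(13)])  # ~max: most pessimistic
```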
International Nuclear Information System (INIS)
Anon.
1995-01-01
This conference was held September 26-29, 1995 in New Orleans, Louisiana. The purpose of this conference was to provide a multidisciplinary forum for exchange of state-of-the-art information on the consequences of accidental releases of hazardous materials. Attention is focused on air dispersion of vapors. Individual papers have been processed separately for inclusion in the appropriate data bases.
Incorporating fine-scale drought information into an eastern US wildfire hazard model
Matthew P. Peters; Louis R. Iverson
2017-01-01
Wildfires in the eastern United States are generally caused by humans in locations where human development and natural vegetation intermingle, e.g. the wildland-urban interface (WUI). Knowing where wildfire hazards are elevated across the forested landscape may help land managers and property owners plan or allocate resources for potential wildfire threats. In an...
Ale, B.J.M.; Hanea, D.M.; Sillem, S.; Lin, P.H.; Van Gulijk, C.; Hudson, P.T.W.
2012-01-01
Recent disasters in high hazard industries such as Oil and Gas Exploration (The Deepwater Horizon) and Petrochemical production (Texas City) have been found to have causes that range from direct technical failures through organizational shortcomings right up to weak regulation and inappropriate
Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model
X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)
2016-01-01
Operators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on the operating costs of firms needing these parts. Existing end-of-supply evaluation
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress-trigger or stress-shadow model. The probability model incorporating stress transfer has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations, not on single earthquake pairs with a specified time delay. The interaction probability may thus be the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to
Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng
2018-02-02
In recent years, many studies have focused on the application of advanced technology to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model that provides visualization throughout the construction lifecycle. This paper integrates BIM and WSN into a unique system that enables a construction site to visually monitor safety status via a spatial, colored interface and to remove hazardous gas automatically. Wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental conditions (temperature and humidity); in any region where an abnormal status is detected, the BIM model highlights the region, and an on-site alarm and ventilator start automatically to warn workers and remove the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions, and suggestions are summarized for further applications.
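A minimal sketch of the monitoring loop described above; the node identifiers, threshold and the BIM/ventilation hooks are hypothetical placeholders, not the paper's implementation:

```python
GAS_LIMIT_PPM = 50.0  # hypothetical alarm threshold

def check_readings(readings, alert, start_ventilator):
    """readings: {region_id: {"gas_ppm": float, "temp_c": float, "rh": float}}.
    Any region above the gas threshold is flagged in the BIM view and its
    ventilator is started."""
    for region, r in readings.items():
        if r["gas_ppm"] > GAS_LIMIT_PPM:
            alert(region, r)            # e.g. color the region in the BIM model
            start_ventilator(region)    # remove the hazardous gas

check_readings(
    {"B2-zone3": {"gas_ppm": 72.0, "temp_c": 31.5, "rh": 0.8}},
    alert=lambda region, r: print(f"ALERT {region}: {r['gas_ppm']} ppm"),
    start_ventilator=lambda region: print(f"ventilating {region}"),
)
```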
Soil-to-Plant Concentration Ratios for Assessing Food Chain Pathways in Biosphere Models
Energy Technology Data Exchange (ETDEWEB)
Napier, Bruce A.; Fellows, Robert J.; Krupka, Kenneth M.
2007-10-01
This report describes work performed for the U.S. Nuclear Regulatory Commission's project Assessment of Food Chain Pathway Parameters in Biosphere Models, which was established to assess and evaluate a number of key parameters used in the food-chain models applied in performance assessments of radioactive waste disposal facilities. Section 2 of this report summarizes characteristics of samples of soils and groundwater from three geographical regions of the United States (the Southeast, Northwest, and Southwest) and the analyses performed to characterize their physical and chemical properties. Because the uptake and behavior of radionuclides in plant roots, plant leaves, and animal products depend on the chemistry of the water and soil coming into contact with plants and animals, water and soil samples collected from these regions were used in experiments at Pacific Northwest National Laboratory to determine radionuclide soil-to-plant concentration ratios. Crops and forage used in the experiments were grown in the soils, and long-lived radionuclides introduced into the groundwater provided the contaminated water used to irrigate the plants. The radionuclides evaluated include 99Tc, 238Pu, and 241Am. Plant varieties include alfalfa, corn, onion, and potato. The radionuclide uptake results from this research show how regional variations in water quality and soil chemistry affect radionuclide uptake. Section 3 summarizes the procedures and results of the uptake experiments and presents the derived soil-to-plant uptake factors. In Section 4, the results of this study are compared with similar values found in the biosphere modeling literature; they are generally in line with the current literature, but soil- and plant-specific differences are noticeable. These food-chain pathway data may be used by NRC staff to assess dose to persons in the reference biosphere (e.g., persons who live and work in an area potentially affected by
... chemicals can still harm human health and the environment. When you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint thinner. U.S. residents ...
Kourgialas, N. N.; Karatzas, G. P.
2014-03-01
A modeling system for the estimation of flash flood flow velocity and sediment transport is developed in this study. The system comprises three components: (a) a modeling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modeling is the Manning's coefficient, an indicator of the channel resistance which is directly dependent on riparian vegetation changes. Riparian vegetation's effect on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modeling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, a well-balanced selection of the most appropriate agricultural cutting practices of riparian vegetation was performed. Ultimately, the model results obtained for different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
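Manning's equation shows directly why weed cutting increases velocity: the roughness coefficient n sits in the denominator. A sketch with generic values, not the calibrated MIKE 11 parameters:

```python
import numpy as np

def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity (m/s) from Manning's equation,
    v = (1/n) * R^(2/3) * S^(1/2). Cutting riparian weeds lowers n
    and so raises the velocity, consistent with the behaviour above."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

v_vegetated = manning_velocity(n=0.10, hydraulic_radius=1.2, slope=0.004)
v_cut       = manning_velocity(n=0.04, hydraulic_radius=1.2, slope=0.004)
```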
Directory of Open Access Journals (Sweden)
Leah M. Courtland
2012-07-01
Full Text Available The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.
Wang, Junjie; He, Jiangtao; Chen, Honghan
2012-08-15
Groundwater contamination risk assessment is an effective tool for groundwater management. Most existing risk assessment methods only consider the basic contamination process, based on evaluations of hazards and aquifer vulnerability. In view of groundwater exploitation potential, including the value of contamination-threatened groundwater can provide relatively objective and targeted results to aid decision making. This study describes a groundwater contamination risk assessment method that integrates hazards, intrinsic vulnerability and groundwater value. The hazard harmfulness was evaluated by quantifying contaminant properties and the infiltrating contaminant load, the intrinsic aquifer vulnerability was evaluated using a modified DRASTIC model, and the groundwater value was evaluated based on groundwater quality and aquifer storage. Two groundwater contamination risk maps were produced by combining the above factors: a basic risk map and a value-weighted risk map. The basic risk map was produced by overlaying the hazard map and the intrinsic vulnerability map; the value-weighted risk map was produced by overlaying the basic risk map and the groundwater value map. Validation was carried out using contaminant distributions and site investigations. Using Beijing Plain, China, as an example, thematic maps of the three factors and the two risks were generated. The thematic maps suggested that landfills, gas stations and oil depots, and industrial areas were the most harmful potential contamination sources. The western and northern parts of the plain were the most vulnerable areas and had the highest groundwater value. Additionally, both the basic and value-weighted risk classes in the western and northern parts of the plain were the highest, indicating that these regions deserve priority attention. Thematic maps should be updated regularly because of the dynamic characteristics of hazards. Subjectivity and validation means in assessing the
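One plausible overlay rule for combining the three classed maps is a simple product, sketched below with hypothetical 3x3 grids; the paper's actual combination scheme may differ:

```python
import numpy as np

# Classed rasters (1 = low ... 5 = high), hypothetical example grids.
hazard        = np.array([[3, 4, 5], [2, 3, 4], [1, 2, 3]])
vulnerability = np.array([[4, 4, 5], [3, 3, 4], [2, 2, 3]])
gw_value      = np.array([[5, 4, 4], [4, 3, 3], [2, 2, 2]])

basic_risk = hazard * vulnerability           # basic risk map
value_weighted_risk = basic_risk * gw_value   # value-weighted risk map
```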
Combining slope stability and groundwater flow models to assess stratovolcano collapse hazard
Ball, J. L.; Taron, J.; Reid, M. E.; Hurwitz, S.; Finn, C.; Bedrosian, P.
2016-12-01
Flank collapses are a well-documented hazard at volcanoes. Elevated pore-fluid pressures and hydrothermal alteration are invoked as potential causes of the instability in many of these collapses. Because pore pressure is linked to water saturation and permeability of volcanic deposits, hydrothermal alteration is often suggested as a means of creating low-permeability zones in volcanoes. Here, we seek to address the question: What alteration geometries will produce elevated pore pressures in a stratovolcano, and what are the effects of these elevated pressures on slope stability? We initially use a finite element groundwater flow model (a modified version of OpenGeoSys) to simulate generic stratovolcano geometries that produce elevated pore pressures. We then input these results into the USGS slope-stability code Scoops3D to investigate the effects of alteration and magmatic intrusion on potential flank failure. This approach integrates geophysical data about subsurface alteration, water saturation and rock mechanical properties with data about precipitation and heat influx at Cascade stratovolcanoes. Our simulations show that it is possible to maintain high-elevation water tables in stratovolcanoes given specific ranges of edifice permeability (ideally between 10^-15 and 10^-16 m^2). Low-permeability layers (10^-17 m^2, representing altered pyroclastic deposits or altered breccias) in the volcanoes can localize saturated regions close to the surface, but they may actually reduce saturation, pore pressures, and water table levels in the core of the volcano. These conditions produce universally lower factor-of-safety (F) values than at an equivalent dry edifice with the same material properties (lower values of F indicate a higher likelihood of collapse). When magmatic intrusions into the base of the cone are added, near-surface pore pressures increase and F decreases exponentially with time (about 7-8% in the first year). However, while near-surface impermeable layers
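The role of pore pressure in lowering F can be sketched with the textbook infinite-slope factor of safety, a 1D stand-in for the 3D limit-equilibrium search Scoops3D performs; all parameter values are generic placeholders:

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, h, alpha_deg, u):
    """Infinite-slope factor of safety with pore pressure u (Pa).
    c: cohesion (Pa); phi_deg: friction angle; gamma: unit weight (N/m^3);
    h: failure depth (m); alpha_deg: slope angle. F < 1 implies likely
    failure, and raising u lowers F, as in the simulations above."""
    a = np.radians(alpha_deg)
    normal = gamma * h * np.cos(a) ** 2
    shear = gamma * h * np.sin(a) * np.cos(a)
    return (c + (normal - u) * np.tan(np.radians(phi_deg))) / shear

f_dry = factor_of_safety(5e4, 35.0, 2.2e4, 50.0, 30.0, u=0.0)
f_wet = factor_of_safety(5e4, 35.0, 2.2e4, 50.0, 30.0, u=2.0e5)
```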
Flood susceptibility analysis through remote sensing, GIS and frequency ratio model
Samanta, Sailesh; Pal, Dilip Kumar; Palsamanta, Babita
2018-05-01
Papua New Guinea (PNG) is saddled with frequent natural disasters such as earthquakes, volcanic eruptions, landslides, droughts and floods. Flood, as a hydrological disaster to humankind's niche, brings about a powerful and often sudden, pernicious change in the surface distribution of water on land, while the benevolence of flood manifests in restoring the health of the thalweg from excessive siltation by redistributing fertile sediments on the riverine floodplains. From social, economic and environmental perspectives, flood is one of the most devastating disasters in PNG. This research was conducted to investigate the usefulness of remote sensing, geographic information systems and the frequency ratio (FR) model for flood susceptibility mapping. The FR model was used to handle different independent variables via weighted bivariate probability values to generate a plausible flood susceptibility map. This study was conducted in the Markham riverine precinct under Morobe province in PNG. A historical flood inventory database of the PNG resource information system (PNGRIS) was used to generate 143 flood locations based on "create fishnet" analysis. 100 (70%) flood sample locations were selected randomly for model building. Ten independent variables, namely land use/land cover, elevation, slope, topographic wetness index, surface runoff, landform, lithology, distance from the main river, soil texture and soil drainage, were used in the FR model for flood vulnerability analysis. Finally, a database of areas vulnerable to flooding was developed. The result demonstrated a span of FR values ranging from 2.66 (least flood prone) to 19.02 (most flood prone) for the study area. The developed database was reclassified into five (5) flood vulnerability zones based on the FR values, namely very low (less than 5.0), low (5.0-7.5), moderate (7.5-10.0), high (10.0-12.5) and very high susceptibility (more than 12.5). The result indicated that about 19.4% of the land area was classified as `very high
Keith, A. M.; Weigel, A. M.; Rivas, J.
2014-12-01
Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. There are several small towns located in proximity to the volcano, the two largest being Baños Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation, rendering it poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat to the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine the areas with the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to volcanic hazards, including volcanic ash, SO2 gas, lava flows, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flows, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.
The model of fraud detection in financial statements by means of financial ratios
Kanapickienė, Rasa; Grundienė, Živilė
2015-01-01
Analysis of financial ratios is one of the simpler methods for identifying fraud. A theoretical survey revealed that, in the scientific literature, financial ratios are analysed in order to determine which ratios of the financial statements are most sensitive to the motives of executive managers and employees of companies to commit fraud. The empirical study included the analysis of the following: 1) 40 sets of fraudulent financial statements and 2) 125 sets of non-fraudulent financ...
Foxp3 regulates ratio of Treg and NKT cells in a mouse model of asthma.
Lu, Yanming; Guo, Yinshi; Xu, Linyun; Li, Yaqin; Cao, Lanfang
2015-05-01
Asthma is caused by chronic inflammatory disorder of the airways. Regulatory T cells (Treg cells) and natural killer T cells (NKT cells) both play critical roles in the pathogenesis of asthma. Activation of Treg cells requires Foxp3, but whether Foxp3 regulates the ratio of Treg to NKT cells in asthma remains uncertain. In an ovalbumin (OVA)-induced mouse model of asthma, we either increased Treg cells by lentivirus-mediated forced expression of exogenous Foxp3, or increased NKT cells by stimulation with their activator α-GalCer. We found that CD4+CD25+ Treg cells increased with forced Foxp3 expression and decreased with α-GalCer, while CD3+CD161+ NKT cells decreased with forced Foxp3 expression and increased with α-GalCer. Moreover, forced Foxp3 expression, but not α-GalCer, significantly alleviated the hallmarks of asthma. Furthermore, forced Foxp3 increased levels of IL-10 and TGF-β1, and α-GalCer increased levels of IL-4 and IFN-γ in the OVA-treated lung. Taken together, our study suggests that Foxp3 may activate Treg cells and suppress NKT cells in asthma. Treg and NKT cells may antagonize each other's effects in asthma.
A multiscale method for modeling high-aspect-ratio micro/nano flows
Lockerby, Duncan; Borg, Matthew; Reese, Jason
2012-11-01
In this paper we present a new multiscale scheme for simulating micro/nano flows of high aspect ratio in the flow direction, e.g. within long ducts, tubes, or channels of varying section. The scheme consists of applying a simple hydrodynamic description over the entire domain, and allocating micro sub-domains in very small "slices" of the channel. Every micro element is a molecular dynamics simulation (or other appropriate model, e.g., a direct simulation Monte Carlo method for micro-channel gas flows) over the local height of the channel/tube. The number of micro elements, as well as their streamwise positions, is chosen to resolve the geometrical features of the macro channel. While there is no direct communication between individual micro elements, coupling occurs via an iterative imposition of mass and momentum-flux conservation on the macro scale. The greater the streamwise scale of the geometry, the more significant is the computational speed-up when compared to a full MD simulation. We test our new multiscale method on the case of a converging/diverging nanochannel conveying a simple Lennard-Jones liquid. We validate the results from our simulations by comparing them to a full MD simulation of the same test case. Supported by EPSRC Programme Grant EP/I011927/1.
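To make the coupling strategy concrete, here is a minimal sketch of the iterative mass-flux matching described above, with a planar Poiseuille surrogate standing in for the molecular dynamics micro elements; the geometry, viscosity, and relaxation scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Macro description of a converging nanochannel, split into micro "slices"
h = np.linspace(2e-9, 1e-9, 11)    # local channel heights (m)
L = np.full_like(h, 5e-9)          # streamwise length of each slice (m)
mu = 1.0e-3                        # liquid viscosity (Pa s)
dp_total = 1.0e5                   # imposed end-to-end pressure drop (Pa)

def micro_pressure_gradient(q, h_i):
    """Pressure gradient a micro element reports as needed to drive volume
    flux q (per unit width) through a slice of height h_i. A planar
    Poiseuille surrogate stands in for the molecular dynamics element."""
    return 12.0 * mu * q / h_i**3

q = 1.0e-12   # initial guess for the constant flux per unit width (m^2/s)
for it in range(200):
    # Total pressure drop the micro elements report for the current flux
    dp_sum = float(np.sum(micro_pressure_gradient(q, h) * L))
    # Rescale the flux toward the imposed total pressure drop, with relaxation
    q_new = q * dp_total / dp_sum
    if abs(q_new - q) <= 1e-10 * abs(q):
        break
    q = 0.5 * (q + q_new)
print(f"converged flux per unit width: {q:.3e} m^2/s in {it + 1} iterations")
```

Because each micro element only reports the pressure gradient needed to sustain the trial flux, the macro iteration never needs access to molecular detail, which is the essence of the claimed speed-up.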
Johnson, Branden B; Hallman, William K; Cuite, Cara L
2015-03-01
Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.
International Nuclear Information System (INIS)
Luria, Paolo; Aspinall, Peter A.
2003-01-01
The aim of this paper is to describe a new approach to major industrial hazard assessment, recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, quantitative risk analysis (QRA), only provided a list of individual quantitative risk values related to single locations. The experimental model is based on a multi-criteria approach, the Analytic Hierarchy Process (AHP), which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with traditional quantitative analysis. This permitted the generation of quantitative risk-assessment data from a series of qualitative assessments of the present situation and of three future scenarios, and the use of this information as indirect quantitative measures that could be aggregated to obtain a global risk rating. This approach is in line with the main concepts proposed by the latest European directive on major hazard accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable).
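As an illustration of the AHP step at the core of such an experimental model, the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the matrix entries are invented, not taken from the Porto Marghera study.

```python
import numpy as np

# Pairwise comparison matrix from expert judgments (Saaty 1-9 scale);
# entry [i, j] says how much more important criterion i is than criterion j.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# Principal eigenvector gives the criterion weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random consistency index
print("weights:", np.round(w, 3), " CR:", round(ci / ri, 3))
```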
International Nuclear Information System (INIS)
Rempe, N.T.
1991-01-01
For 18 years, the Herfa-Neurode underground repository has demonstrated the environmentally sound disposal of hazardous waste in a former potash mine. Its principal characteristics make it an excellent analogue to the Waste Isolation Pilot Plant (WIPP). The Environmental Protection Agency has ruled, in its first conditional no-migration determination, that it is reasonably certain that no hazardous constituents of the mixed waste destined for the WIPP during its test phase will migrate from the site for up to ten years. Knowledge of and reference to the Herfa-Neurode operating model may substantially improve the no-migration variance petition for the WIPP's disposal phase and thereby expedite its approval. 2 refs., 1 fig., 1 tab
LISREL Model of Hazardous Medical Solid Infectious Waste Management in Hospitals in Medan City
Simarmata, Verawaty; Siahaan, Ungkap; Pandia, Setiaty; Mawengkang, Herman
2018-01-01
Hazardous and toxic waste resulting from activities at most hospitals contains various elements of medical solid waste, including heavy metals with cumulative toxicity that are harmful to human health. Medical waste in the form of gas, liquid or solid generally falls into the category of hazardous and toxic waste. Hospital operations aim to improve health and well-being, but they also produce waste that pollutes water, soil and air. Against this background, controlling pollution from hospital medical solid waste is one of the fundamental problems in the city of Medan, and supervision of licensing is the main control alternative in accordance with applicable regulations.
Sex ratio selection and multi-factorial sex determination in the housefly : A dynamic model
Kozielska, M.A.; Pen, I.R.; Beukeboom, L.W.; Weissing, F.J.
Sex determining (SD) mechanisms are highly variable between different taxonomic groups and appear to change relatively quickly during evolution. Sex ratio selection could be a dominant force causing such changes. We investigate theoretically the effect of sex ratio selection on the dynamics of a
DEFF Research Database (Denmark)
Huang, Lam Opal; Infante-RIvard, Claire; Labbe, Aurélie
2016-01-01
Transmission of the two parental alleles to offspring that deviates from the Mendelian ratio is termed Transmission Ratio Distortion (TRD); it occurs throughout gametic and embryonic development. TRD has been well studied in animals but remains largely unknown in humans. The Transmission Disequilibrium...
Energy Technology Data Exchange (ETDEWEB)
Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J
2008-02-11
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.
Poisel, R.; Preh, A.; Hofmann, R.; Schiffer, M.; Sausgruber, Th.
2009-04-01
A rock slide onto the clayey-silty-sandy-pebbly masses in the Gschliefgraben (Upper Austria province, Lake Traunsee), which occurred in 2006, together with the humid autumn of 2007, triggered an earth flow comprising a volume of up to 5 million m³ and moving with a maximum displacement velocity of 5 m/day during the winter of 2007-2008. The possible damage was estimated at up to €60 million due to the possible destruction of houses and of a road to a settlement with heavy tourism. Exploratory drillings revealed that the moving mass consists of alternating beds of thicker, less permeable clayey-silty layers and thinner, more permeable silty-sandy-pebbly layers. The movement front ran ahead in the creek bed. It was therefore assumed that water played an important role and that the earth flow moved due to water soaking into the ground from the area of the rock slide downslope. Inclinometer measurements showed that the uppermost, less permeable layer was sliding on a thin, more permeable layer. The movement process was analysed by numerical models (FLAC) and by conventional calculations in order to assess the hazard. The coupled flow and mechanical models showed that sections of the less permeable layer soaked with water were sliding on the thin, more permeable layer due to excess water seeping out of the more permeable layer. These sections were thrust over the downslope-lying, less soaked areas, which therefore had higher strength. The material thrust over the downslope-lying, less soaked areas, together with the moving front of pore water pressures, caused the downslope material to fail and to be thrust over the material lying farther downslope at a distance of some 50 m. Thus a cyclic process was created without any indication of a sudden sliding of the complete less permeable layer. Nevertheless, the inhabitants of 15 houses had to be evacuated for safety reasons. They could return to their homes after displacement velocities had decreased. Displacement monitoring by GPS showed that
Directory of Open Access Journals (Sweden)
Daniel Asare-Kyei
2015-07-01
Full Text Available Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability, while global datasets have resolutions too coarse to be relevant for local-scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS) techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI). Results show that about half of the study areas fall into high-intensity flood zones. Empirical validation using a statistical confusion matrix and the principles of participatory GIS shows that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported by local expert knowledge, which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers in reducing the risk of flood disasters at the community level, where risk outcomes first materialize.
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
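For readers unfamiliar with the PWPH parametrization, the following sketch evaluates a piecewise Weibull baseline hazard whose scale parameter jumps at change points; the functional form and all numbers are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def pwph_baseline_hazard(t, rho, change_points, log_lambdas):
    """Piecewise Weibull baseline hazard h0(t) = lambda_j * rho * t**(rho - 1),
    where lambda_j depends on the interval between change points that t falls
    into. Parametrization assumed for illustration."""
    t = np.atleast_1d(t).astype(float)
    j = np.searchsorted(change_points, t)      # interval index for each t
    lam = np.exp(np.asarray(log_lambdas))[j]   # interval-specific scale
    return lam * rho * t ** (rho - 1.0)

# Example: two change points (days) give three hazard regimes
t = np.array([50.0, 150.0, 300.0, 400.0])
print(pwph_baseline_hazard(t, rho=1.2,
                           change_points=[120.0, 365.0],
                           log_lambdas=[-7.0, -6.0, -6.5]))
```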
Seismic hazard in the eastern United States
Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison
2015-01-01
The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.
Optimal energy-utilization ratio for long-distance cruising of a model fish
Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang
2012-07-01
The efficiency of total energy utilization and its optimization for long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these well-known classic questions. Here, we specify the energy-utilization ratio (fη) as a measure of cruising efficiency, defined as the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. A theoretical formulation of the function fη is made, and it is shown that, based on a basic dimensional analysis, the main dimensionless parameters for our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output in various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. The energy consumption rate of muscle activities can then be estimated by dividing the hydrodynamic power by the muscle efficiency studied by previous researchers. The present results show the following: (1) When the value of fη attains a maximum, the dimensionless parameter Rpm remains almost constant for the same fish species at different sizes. (2) In these cases, the tail beat period is an exponential function of the fish body length when cruising is optimal; e.g., the optimal tail beat period of Sockeye salmon is approximately proportional to the body length to the power of 0.78. Moreover, larger fish are better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimations.
Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique
2016-04-01
Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling
Zan, Xinxing Anna; Yoon, Sang Won; Khasawneh, Mohammad; Srihari, Krishnaswami
2013-01-01
In an effort to develop a low-cost and user-friendly forecasting model to minimize forecasting error, we have applied average and exponentially weighted return ratios to project undergraduate student enrollment. We tested the proposed forecasting models with different sets of historical enrollment data, such as university-, school-, and…
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2008-01-01
A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is
Mullens, E.; Mcpherson, R. A.
2016-12-01
This work develops detailed trends in climate hazards affecting the Department of Transportation's Region 6, in the South Central U.S. First, a survey was developed to gather information regarding weather and climate hazards in the region from the transportation community, identifying key phenomena and thresholds to evaluate. Statistically downscaled datasets were obtained from the Multivariate Adaptive Constructed Analogues (MACA) project and the Asynchronous Regional Regression Model (ARRM), for a total of 21 model projections, two coupled model intercomparisons (CMIP3 and CMIP5), and four emissions pathways (A1Fi, B1, RCP8.5, RCP4.5). Specific hazards investigated include winter weather, freeze-thaw cycles, hot and cold extremes, and heavy precipitation. Projections for each of these variables were calculated for the region, utilizing spatial mapping and time series analysis at the climate division level. The results indicate that cold-season phenomena such as winter weather, freeze-thaw, and cold extremes decrease in intensity and frequency, particularly under the higher emissions pathways. Nonetheless, the specific model and downscaling method yield variability in magnitudes, with the most notable decreasing trends late in the 21st century. Hot days show a pronounced increase, particularly with greater emissions, producing annual mean 100°F day frequencies by the late 21st century analogous to the 2011 heatwave over the central Southern Plains. Heavy precipitation, evidenced by return period estimates and counts-over-thresholds, also shows notable increasing trends, particularly between the recent past and the mid-21st century. Conversely, mean precipitation does not show significant trends and is regionally variable. Precipitation hazards (e.g., winter weather, extremes) diverge between downscaling methods and their associated model samples much more substantially than temperature, suggesting that the choice of global model and downscaled data is particularly
Performances of the likelihood-ratio classifier based on different data modelings
Chen, C.; Veldhuis, Raymond N.J.
2008-01-01
The classical likelihood ratio classifier easily collapses in many biometric applications, especially with independent training and test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from
A Multinomial Model of Fertility Choice and Offspring Sex-Ratios in India
Rubiana Chamarbagwala; Martin Ranger
2007-01-01
Fertility decline in developing countries may have unexpected demographic consequences. Although lower fertility improves nutrition, health, and human capital investments for surviving children, little is known about the relationship between fertility outcomes and female-male offspring sex ratios. Particularly in countries with a cultural preference for sons, like India and China, fertility decline may worsen the already imbalanced sex ratios. We use the fertility histories of over 90,00...
Shair, Syazreen Niza; Yusof, Aida Yuzi; Asmuni, Nurin Haniah
2017-05-01
Coherent mortality forecasting models have recently received increasing attention, particularly in their application to sub-populations. The advantage of coherent models over independent models is their ability to forecast non-divergent mortality for two or more sub-populations. One such coherent model was recently developed by [1] and is known as the product-ratio model. This model is an extension of the functional independent model of [2]. The product-ratio model has been applied in a developed country, Australia [1], and has been extended to a developing nation, Malaysia [3]. While [3] accounted for coherency of mortality rates between genders and ethnic groups, coherency between states in Malaysia has never been explored. This paper forecasts the mortality rates of Malaysian sub-populations by state using the product-ratio coherent model and its independent version, the functional independent model. The forecast accuracies of the two models are evaluated using out-of-sample error measurements: the mean absolute forecast error (MAFE) for age-specific death rates and the mean forecast error (MFE) for life expectancy at birth. We employ Malaysian mortality time series data from 1991 to 2014, segregated by age, gender and state.
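The core of the product-ratio idea can be shown in a few lines: model the geometric mean (product) and the ratio of two sub-population mortality schedules instead of the schedules themselves, then recombine the forecasts so the ratio component can be kept mean-reverting. The sketch below uses invented rates for two hypothetical states, not Malaysian data.

```python
import numpy as np

# Illustrative age-specific death rates (rows: years, cols: ages)
rng = np.random.default_rng(0)
m_state1 = np.exp(-5.0 + 0.08 * np.arange(20) + 0.05 * rng.standard_normal((30, 20)))
m_state2 = np.exp(-5.2 + 0.08 * np.arange(20) + 0.05 * rng.standard_normal((30, 20)))

# Product-ratio decomposition
product = np.sqrt(m_state1 * m_state2)   # common, trending component
ratio = np.sqrt(m_state1 / m_state2)     # state-specific, mean-reverting part

# The two components are modelled and forecast separately (e.g. with
# functional time-series models); forecasts are then recombined:
m1_hat = product * ratio
m2_hat = product / ratio
assert np.allclose(m1_hat, m_state1) and np.allclose(m2_hat, m_state2)
```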
Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu
2015-05-01
Because water quality monitoring sections or sites reflect the water quality status of rivers, surface water quality management based on monitoring sections or sites is effective. For the purpose of improving river water quality, quantifying the contribution ratios of pollutant sources to a specific section is necessary. Because the physical and chemical processes of nutrient pollutants in water bodies are complex, it is difficult to compute these contribution ratios quantitatively. However, water quality models have proved to be effective tools for estimating surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Contribution ratios were then analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with pollutant load ratios of different sources in the watershed, an analysis of contribution ratios of pollutant sources for each specific section, which takes localized chemical and physical processes into consideration, is more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support improvement of the local environment.
Hazard function theory for nonstationary natural hazards
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
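For concreteness, the hazard function for the generalized Pareto case mentioned above can be written in closed form (notation assumed here: scale σ > 0, shape ξ, exceedance x ≥ 0):

```latex
% Generalized Pareto exceedances X (scale \sigma > 0, shape \xi):
S(x) = \Pr(X > x) = \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi},
\qquad
f(x) = \frac{1}{\sigma}\left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi - 1},

% so the magnitude-domain hazard function is
h(x) = \frac{f(x)}{S(x)} = \frac{1}{\sigma + \xi x}.
```

This magnitude-domain hazard is constant for ξ = 0 (the exponential case), decreasing in x for ξ > 0, and increasing for ξ < 0; the paper's time-domain analysis of failure times T builds on this distributional assumption.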
Directory of Open Access Journals (Sweden)
Rasool Mahdavi Najafabadi
2016-01-01
Full Text Available In this paper, among multi-criteria models for complex decision-making and multiple-attribute models for assigning the most preferable choice, the technique for order preference by similarity to ideal solution (TOPSIS) is applied. The main objective of this research is to identify potential natural hazards in Bandar Abbas city, Iran, using the TOPSIS model, which is based on an analytical hierarchy process structure. A set of 12 relevant geomorphologic parameters, including earthquake frequency, distance from the earthquake epicentre, number of faults, flood, talus creep, landslide, land subsidence, tide, hurricane and tidal wave, dust storms with external sources, wind erosion and sea level fluctuations, is considered to quantify the inputs of the model. The outputs of this study indicate that one of the three assessed regions has the maximum potential occurrence of natural hazards, while it has been urbanized at a greater rate than the other regions. Furthermore, based on the Delphi method, the earthquake frequency and the landslide are the most and the least dangerous phenomena, respectively.
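A compact numerical sketch of the TOPSIS ranking machinery described above; the decision matrix (regions by hazard criteria, larger values meaning more hazard-prone) and the weights are invented for illustration, not the Bandar Abbas inputs.

```python
import numpy as np

# Invented decision matrix: 3 regions x 4 hazard indicators
X = np.array([[7.0, 5.0, 8.0, 6.0],
              [4.0, 6.0, 5.0, 7.0],
              [9.0, 8.0, 7.0, 9.0]])
w = np.array([0.4, 0.2, 0.2, 0.2])        # criterion weights (e.g. from AHP)

R = X / np.linalg.norm(X, axis=0)         # vector normalization
V = R * w                                 # weighted normalized matrix

ideal = V.max(axis=0)                     # most hazard-prone profile
anti = V.min(axis=0)                      # least hazard-prone profile
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)  # 1 = closest to the ideal profile

print("hazard-potential ranking (most hazard-prone first):",
      np.argsort(-closeness) + 1)
```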
Gay, Emilie; Senoussi, Rachid; Barnouin, Jacques
2007-01-01
Methods for spatial cluster detection dealing with diseases quantified by continuous variables are few, whereas several diseases are better approached by continuous indicators. For example, subclinical mastitis of the dairy cow is evaluated using a continuous marker of udder inflammation, the somatic cell score (SCS). Consequently, this study proposed to analyze spatialized risk and cluster components of herd SCS through a new method based on a spatial hazard model. The dataset included annual SCS for 34 142 French dairy herds for the year 2000, and important SCS risk factors: mean parity, percentage of winter and spring calvings, and herd size. The model allowed the simultaneous estimation of the effects of known risk factors and of potential spatial clusters on SCS, and the mapping of the estimated clusters and their range. Mean parity and winter and spring calvings were significantly associated with subclinical mastitis risk. The model with the presence of 3 clusters was highly significant, and the 3 clusters were attractive, i.e. closeness to cluster center increased the occurrence of high SCS. The three localizations were the following: close to the city of Troyes in the northeast of France; around the city of Limoges in the center-west; and in the southwest close to the city of Tarbes. The semi-parametric method based on spatial hazard modeling applies to continuous variables, and takes account of both risk factors and potential heterogeneity of the background population. This tool allows a quantitative detection but assumes a spatially specified form for clusters.
Rapid SAR and GPS Measurements and Models for Hazard Science and Situational Awareness
Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Moore, A. W.; Rosen, P. A.; Simons, M.; Webb, F.; Linick, J.; Fielding, E. J.; Lundgren, P.; Sacco, G. F.; Polet, J.; Manipon, G.
2016-12-01
The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating higher level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR), Differential Global Positioning System (DGPS), SAR-based change detection, and image pixel tracking have recently become critical additions to our toolset for understanding and mapping the damage caused by earthquakes, volcanic eruptions, landslides, and floods. Analyses of these data sets are still largely handcrafted following each event and are not generated rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition, the ARIA project is developing the capabilities to provide automated imaging and analysis capabilities necessary to keep up with the imminent increase in raw data from geodetic imaging missions planned for launch by NASA, as well as international space agencies. We will present the progress we have made on automating the analysis of SAR data for hazard monitoring and response using data from Sentinel 1a/b as well as continuous GPS stations. Since the beginning of our project, our team has imaged events and generated response products for events around the world. These response products have enabled many conversations with those in the disaster response community
International Nuclear Information System (INIS)
Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide
2005-09-01
To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code, SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account; for example, it incorporates the group delay time of observed records and an update process model of active faults. This report describes the user's guide of SHEAT-FM, including the outline of the seismic hazard evaluation, specification of input data, a sample problem for a model site, system information and the execution method. (author)
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic moduli of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From the comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. The acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the developed human body model. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person.
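As a building block for the transmissibility matching described above, the sketch below evaluates the classical base-excitation transmissibility ratio of a single-DOF system; the natural frequency and damping ratio are illustrative assumptions, and the 13-DOF model itself requires full modal analysis.

```python
import numpy as np

def transmissibility(freq_hz, fn_hz, zeta):
    """Absolute displacement transmissibility |X/Y| of a base-excited
    single-DOF system as a function of excitation frequency."""
    r = np.asarray(freq_hz, dtype=float) / fn_hz     # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

f = np.linspace(0.1, 25.0, 250)                  # 0-25 Hz range of the study
tr = transmissibility(f, fn_hz=5.0, zeta=0.3)    # illustrative fn and damping
print("peak TR:", tr.max().round(2), "at", f[tr.argmax()].round(2), "Hz")
```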
Performance analysis of wind turbines at low tip-speed ratio using the Betz-Goldstein model
International Nuclear Information System (INIS)
Vaz, Jerson R.P.; Wood, David H.
2016-01-01
Highlights: • General formulations for power and thrust at any tip-speed ratio are developed. • The Joukowsky model for the blades is modified with specific vortex distributions. • Betz-Goldstein model is shown to be the most consistent at low tip-speed ratio. • The effects of finite blade number are assessed using tip loss factors. • Tip loss for finite blade number may complicate the vortex breakdown. - Abstract: Analyzing wind turbine performance at low tip-speed ratio is challenging due to the relatively high level of swirl in the wake. This work presents a new approach to wind turbine analysis including swirl for any tip-speed ratio. The methodology uses the induced velocity field from vortex theory in the general momentum theory, in the form of the turbine thrust and torque equations. Using the constant bound circulation model of Joukowsky, the swirl velocity becomes infinite on the wake centreline even at high tip-speed ratio. Rankine, Vatistas and Delery vortices were used to regularize the Joukowsky model near the centreline. The new formulation prevents the power coefficient from exceeding the Betz-Joukowsky limit. An alternative calculation, based on the varying circulation for Betz-Goldstein optimized rotors, is shown to have the best general behavior. Prandtl's approximation for the tip loss and a recent alternative were employed to account for the effects of a finite number of blades. The Betz-Goldstein model appears to be the only one resistant to vortex breakdown immediately behind the rotor for an infinite number of blades. Furthermore, the dependence of the induced velocity on radius in the Betz-Goldstein model allows the power coefficient to remain below the Betz-Joukowsky limit, which does not occur for the Joukowsky model at low tip-speed ratio.
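A minimal numerical check of the Betz-Joukowsky limit referenced above, using one-dimensional actuator-disc momentum theory with no swirl; the vortex-based models in the paper generalize this simple result.

```python
import numpy as np

a = np.linspace(0.0, 0.5, 501)      # axial induction factor
cp = 4.0 * a * (1.0 - a) ** 2       # power coefficient, 1D momentum theory
i = cp.argmax()
print(f"max Cp = {cp[i]:.4f} at a = {a[i]:.3f} "
      f"(Betz-Joukowsky limit 16/27 = {16/27:.4f})")
```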
Energy Technology Data Exchange (ETDEWEB)
Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
McAnulty, Michael J; Yen, Jiun Y; Freedman, Benjamin G; Senger, Ryan S
2012-05-14
Genome-scale metabolic networks and flux models are an effective platform for linking an organism's genotype to its phenotype. However, few modeling approaches offer predictive capabilities to evaluate potential metabolic engineering strategies in silico. A new method called "flux balance analysis with flux ratios (FBrAtio)" was developed in this research and applied to a new genome-scale model of Clostridium acetobutylicum ATCC 824 (iCAC490) that contains 707 metabolites and 794 reactions. FBrAtio was used to model wild-type metabolism and metabolically engineered strains of C. acetobutylicum where only flux ratio constraints and thermodynamic reversibility of reactions were required. The FBrAtio approach allowed solutions to be found through standard linear programming. Five flux ratio constraints were required to achieve a qualitative picture of wild-type metabolism for C. acetobutylicum for the production of: (i) acetate, (ii) lactate, (iii) butyrate, (iv) acetone, (v) butanol, (vi) ethanol, (vii) CO2 and (viii) H2. Results of this simulation study coincide with published experimental results and show that knockdown of the acetoacetyl-CoA transferase increases butanol-to-acetone selectivity, while the simultaneous over-expression of the aldehyde/alcohol dehydrogenase greatly increases ethanol production. FBrAtio is a promising new method for constraining genome-scale models using internal flux ratios. The method was effective for modeling wild-type and engineered strains of C. acetobutylicum.
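To show why flux ratio constraints remain compatible with standard linear programming, as the FBrAtio approach requires, here is a toy three-flux example; the network, the 0.7 ratio, and the bounds are invented and have no connection to iCAC490.

```python
from scipy.optimize import linprog

# Toy network: substrate uptake v0 splits at a node into v1 and v2.
# Mass balance:  v0 - v1 - v2 = 0
# Flux ratio:    v1 / (v1 + v2) = 0.7  ->  0.3*v1 - 0.7*v2 = 0 (still linear)
A_eq = [[1.0, -1.0, -1.0],
        [0.0,  0.3, -0.7]]
b_eq = [0.0, 0.0]
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10

# Maximize v1 (linprog minimizes, so negate the objective)
res = linprog(c=[0.0, -1.0, 0.0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v0, v1, v2 = res.x
print(f"v0={v0:.1f}, v1={v1:.1f}, v2={v2:.1f}")   # expect 10.0, 7.0, 3.0
```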
Friedel, Michael J.
2011-01-01
Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organized map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
International Nuclear Information System (INIS)
Rausch, L.
1979-01-01
On a scientific basis and with the aid of realistic examples, the author gives a popular introduction to understanding and judging the public discussion over radiation hazards: uses and hazards of X-ray examinations, biological radiation effects, risks of civilisation in comparison, and the origins and rationale of radiation protection regulations. (orig.) [de]
CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary
Energy Technology Data Exchange (ETDEWEB)
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
Enzenhoefer, R.; Binning, P. J.; Nowak, W.
2015-09-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the pumped water quality upon transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How should the impacts from different contaminants and spill locations be aggregated to an overall, cumulative impact on the value at risk? (2) How should the stochastic nature of spill events be accounted for when converting the aggregated impact to a risk estimate? (3) How will the overall risk and subsequent decision making depend on stakeholder objectives, where stakeholder objectives refer to the values at risk, risk attitudes and risk metrics, which can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept and mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to different approaches for risk aggregation across different hazards, contaminant types, and over time.
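A stripped-down sketch of the mass-discharge aggregation and Monte Carlo steps described above; the hazard frequencies, per-event mass discharges, and threshold are invented, and the real framework obtains the discharges from flow and transport simulations rather than fixed numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 20_000

# Three hazards in the well catchment: spill frequency (events/yr) and the
# contaminant mass discharge (kg/yr) each spill adds at the well if it occurs.
freq = np.array([0.2, 0.05, 0.5])
mass_per_event = np.array([1.0, 8.0, 0.3])
threshold = 2.0              # stakeholder-defined tolerable total discharge

# Poisson event counts per simulated year, aggregated to a total discharge
counts = rng.poisson(freq, size=(n_sim, 3))
total = counts @ mass_per_event
print("P(total mass discharge > threshold) =", (total > threshold).mean())
```

Because the aggregation and the exceedance threshold are kept separate, the same simulated discharges can be re-evaluated under different stakeholder risk metrics without re-running the transport models.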
Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.
2017-12-01
Remotely sensed data have transformed the field of large-scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental-scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national-scale risk analyses still use `constant in space' return period hazard layers (e.g., the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge-to-gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi
2017-11-01
Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of companies listed in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study was conducted in 2015 and involved 116 consumer products companies listed in Bursa Malaysia. The Data Envelopment Analysis estimation method computes efficiency scores and ranks the companies accordingly. The Alirezaee and Afsharian method of analysis, based on the Charnes, Cooper and Rhodes (CCR) model with Constant Returns to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the Balance Index values are equal and zero.
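The DuPont decomposition used in the study reduces to three ratios whose product is ROE; a minimal sketch with invented company figures:

```python
def dupont_roe(net_income, sales, total_assets, equity):
    """ROE = profit margin * asset turnover * equity multiplier."""
    margin = net_income / sales          # profitability
    turnover = sales / total_assets      # efficiency of asset utilization
    leverage = total_assets / equity     # financial leverage
    return margin * turnover * leverage, (margin, turnover, leverage)

# Invented figures (millions)
roe, parts = dupont_roe(net_income=12.0, sales=150.0,
                        total_assets=200.0, equity=80.0)
print(f"ROE = {roe:.1%}; margin = {parts[0]:.1%}, "
      f"turnover = {parts[1]:.2f}, leverage = {parts[2]:.2f}")
```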
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. This paper considers the problem of constructing confidence intervals for the first crossing time point of two hazard functions. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
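A rough sketch of the nonparametric idea, not the authors' implementation: kernel-smooth two raw hazard estimates on a common grid and return the first time their ratio crosses one. The raw hazard inputs and the Gaussian kernel are assumptions here.

```python
import numpy as np

# `h1_raw`/`h2_raw` would come from, e.g., increments of Nelson-Aalen
# estimates at the event times; we smooth both and locate the crossing.
def first_crossing(times, h1_raw, h2_raw, bandwidth=1.0):
    grid = np.linspace(times.min(), times.max(), 500)

    def smooth(h):
        w = np.exp(-0.5 * ((grid[:, None] - times[None, :]) / bandwidth) ** 2)
        return (w * h).sum(axis=1) / w.sum(axis=1)   # Nadaraya-Watson weights

    ratio = smooth(np.asarray(h1_raw)) / smooth(np.asarray(h2_raw))
    crossings = np.nonzero(np.diff(np.sign(ratio - 1.0)))[0]
    return grid[crossings[0]] if crossings.size else None
```

A confidence interval for the returned crossing time would then be obtained by bootstrap resampling of the underlying survival data.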
International Nuclear Information System (INIS)
Podsiadlo, Antoni; Tarelko, Wieslaw
2006-01-01
The most dangerous places in ships are their power plants. In particular, they are very unsafe for operators carrying out the various necessary operation and maintenance activities. For this reason, ship machinery should be designed to ensure maximum safety for its operators. This is a very difficult task and cannot be solved by the conventional design methods used for uncomplicated technical equipment. One possible way of solving this problem is to provide appropriate tools that allow the operator's safety to be taken into account during the design process, especially at its early stages. A computer-aided system supporting the design of safe ship power plants could be such a tool. This paper deals with the development of a prototype of the computer-aided system for hazard zone identification in ship power plants.
International Nuclear Information System (INIS)
McKone, T.E.
1994-01-01
Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
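The mean-plus-CV convention lends itself directly to Monte Carlo propagation. A minimal sketch, assuming (as the abstract does not specify) that inputs are treated as lognormal; all names and numbers are hypothetical, not CAirTOX parameters.

```python
import numpy as np

# Map an arithmetic mean and coefficient of variation (CV) onto a
# lognormal so that E[X] = mean and SD[X]/E[X] = cv, then propagate.
def lognormal_from_mean_cv(mean, cv, size, rng):
    sigma2 = np.log(1.0 + cv**2)          # variance of log(X)
    mu = np.log(mean) - 0.5 * sigma2      # preserves the arithmetic mean
    return rng.lognormal(mu, np.sqrt(sigma2), size)

rng = np.random.default_rng(42)
emission = lognormal_from_mean_cv(mean=10.0, cv=0.3, size=10_000, rng=rng)
intake_factor = lognormal_from_mean_cv(mean=1e-6, cv=0.5, size=10_000, rng=rng)
dose = emission * intake_factor           # toy two-link risk chain
print(np.percentile(dose, [5, 50, 95]))   # uncertainty band on the output
```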
Göransson, Mona; Magnusson, Asa; Heilig, Markus
2006-01-01
It has been repeatedly demonstrated that hazardous alcohol use during pregnancy is rarely detected in regular antenatal care, and that detection can be markedly improved using systematic screening. A major challenge is to translate research-based strategies into regular antenatal care. Here, we examined whether a screening strategy using the Alcohol Use Disorders Identification Test (AUDIT) and time-line follow-back (TLFB) could be implemented under naturalistic conditions and within available resources, and whether it would improve detection to the extent previously shown in a research context. Regular midwives at a large antenatal care clinic were randomized to receive brief training and then implement AUDIT and TLFB ("intervention"), or to a waiting-list control group continuing to deliver regular care ("control"). In the intervention condition, AUDIT was used to collect data about alcohol use during the year preceding pregnancy, and TLFB to assess actual consumption during the first trimester. Data were collected from new admissions over 6 months. Drop-out was higher among patients of the intervention midwives than of the control midwives, 14% (23/162) versus 0% (0/153). Screening in the intervention condition identified patients with hazardous consumption prior to pregnancy, i.e. an AUDIT score of 6 or higher (17%, 23/139), and patients with ongoing consumption exceeding 70 g/week and/or binge consumption according to TLFB (17%, 24/139), to a significantly higher degree than regular antenatal screening (0/162). The AUDIT- and TLFB-positive populations overlapped partially, with 36/139 subjects screening positive on either instrument and 11/139 positive on both. We confirm previous findings that alcohol use during pregnancy is more extensive in Sweden than has generally been realized. Systematic screening using AUDIT and TLFB detects hazardous use in a manner which regular antenatal care does not. This remains true under naturalistic conditions, following minimal training of regular antenatal care staff, and can be achieved with minimal resources. The proposed
Marine natural hazards in coastal zone: observations, analysis and modelling (Plinius Medal Lecture)
Didenkulova, Ira
2010-05-01
Giant surface waves approaching the coast frequently cause extensive coastal flooding, destruction of coastal constructions and loss of life. Such waves can be generated by various phenomena: strong storms and cyclones, underwater earthquakes, high-speed ferries, and aerial and submarine landslides. The most famous examples of such events are the catastrophic tsunami in the Indian Ocean of 26 December 2004 and hurricane Katrina (28 August 2005) in the Atlantic Ocean. The huge storm in the Baltic Sea on 9 January 2005, which produced unexpectedly long waves in many areas of the Baltic Sea, and the unusually high surge created by long waves from high-speed ferries should also be mentioned as examples of regional marine natural hazards connected with extensive runup of certain types of waves. The processes of wave shoaling and runup for these different marine natural hazards (tsunami, coastal freak waves, ship waves) are studied based on rigorous solutions of nonlinear shallow-water theory. The key and novel results presented here are: i) parameterization of basic formulas for extreme runup characteristics of bell-shaped waves, showing that they depend only weakly on the initial wave shape, which is usually unknown in real sea conditions; ii) runup analysis of periodic asymmetric waves with a steep front, since such waves penetrate inland over larger distances and with larger velocities than symmetric waves; iii) statistical analysis of irregular wave runup, demonstrating that nearshore wave nonlinearity does not influence the probability distribution of the velocity of the moving shoreline and its moments, but does influence the vertical displacement of the moving shoreline (runup). Wave runup on convex beaches and in narrow bays, which allow abnormal wave amplification, is also discussed. The analytical results described are used to explain observed extreme runup of tsunamis, freak (sneaker) waves and ship waves on different coasts.
The photometric evolution of dissolving star clusters. II. Realistic models. Colours and M/L ratios
Anders, P.; Lamers, H.J.G.L.M.; Baumgardt, H.
2009-01-01
Evolutionary synthesis models are the primary means of constructing spectrophotometric models of stellar populations and of deriving physical parameters by comparing observations with these models. One of the basic assumptions of evolutionary synthesis models has been the time-independence of the
Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.
2014-01-01
The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey have been updating the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
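To see how a weighted combination of GMMs enters the hazard calculation, consider a toy exceedance computation for a single scenario event, where each GMM contributes a lognormal exceedance probability scaled by its logic-tree weight. All medians, sigmas and weights below are invented, not NGA-West2 values.

```python
import numpy as np
from scipy import stats

weights = np.array([0.25, 0.25, 0.25, 0.25])             # logic-tree weights
ln_medians = np.log(np.array([0.30, 0.28, 0.33, 0.31]))  # median GM in g
sigmas = np.array([0.65, 0.70, 0.68, 0.72])              # aleatory sigma (ln units)

def prob_exceed(ground_motion_g):
    # Weighted sum of each model's lognormal exceedance probability.
    z = (np.log(ground_motion_g) - ln_medians) / sigmas
    return float(np.sum(weights * stats.norm.sf(z)))

print(prob_exceed(0.4))   # P(0.4 g exceeded | scenario event)
```

The trade-off noted above is visible here: lowering the medians pulls the exceedance probability down, while inflating the sigmas pushes it back up.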
Bolck, A.; Ni, H.; Lopatka, M.
2015-01-01
Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are
Molisee, D. D.; Germa, A.; Charbonnier, S. J.; Connor, C.
2017-12-01
Medicine Lake Volcano (MLV) is the most voluminous of all the Cascade volcanoes (∼600 km3) and has the highest eruption frequency after Mount St. Helens. Detailed mapping by USGS colleagues has shown that during the last 500,000 years MLV erupted >200 lava flows ranging from basalt to rhyolite, produced at least one ash-flow tuff, one caldera-forming event, and at least 17 scoria cones. Underlying these units are 23 additional volcanic units that are considered to be pre-MLV in age. Despite the very high likelihood of future eruptions, fewer than 60 of the 250 mapped volcanic units (MLV and pre-MLV) have been dated reliably. A robust set of eruptive ages is key to understanding the history of the MLV system and to forecasting the future behavior of the volcano. The goals of this study are to 1) obtain additional radiometric ages from stratigraphically strategic units; 2) recalculate the recurrence rate of eruptions based on an augmented set of radiometric dates; and 3) use lava flow, PDC, ash fall-out, and lahar computational simulation models to assess the potential effects of discrete volcanic hazards locally and regionally. We identify undated target units (units in key stratigraphic positions to provide maximum chronological insight) and obtain field samples for radiometric dating (40Ar/39Ar and K/Ar) and petrology. Stratigraphic and radiometric data are then used together in the Volcano Event Age Model (VEAM) to identify changes in the rate and type of volcanic eruptions through time, with statistical uncertainty. These newly obtained datasets will be added to published data to build a conceptual model of volcanic hazards at MLV. Alternative conceptual models may be, for example, that the rate of MLV lava flow eruptions is nonstationary in time and/or space and/or volume. We explore the consequences of these alternative models for forecasting future eruptions. As different styles of activity have different impacts, we estimate these potential effects using simulation
Rich dynamics of a food chain model with ratio-dependent type III ...
African Journals Online (AJOL)
prey dynamics is to incorporate discrete delay into the predator equations. ... models may be found in classical books of Gopalsamy (1992), Macdonald (1989) .... Model (1) has 10 parameters in all, which make mathematical analysis complex.
McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.
2017-09-01
The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered high-hazard for future damaging earthquakes.
Lee, A H; Yau, K K
2001-01-01
To identify factors associated with hospital length of stay (LOS) and to model variations in LOS within Diagnosis Related Groups (DRGs). A proportional hazards frailty modelling approach is proposed that accounts for patient transfers and the inherent correlation of patients clustered within hospitals. The investigation is based on patient discharge data extracted for a group of obstetrical DRGs. Application of the frailty approach has highlighted several significant factors after adjustment for patient casemix and random hospital effects. In particular, patients admitted for childbirth with private medical insurance coverage have higher risk of prolonged hospitalization compared to public patients. The determination of pertinent factors provides important information to hospital management and clinicians in assessing the risk of prolonged hospitalization. The analysis also enables the comparison of inter-hospital variations across adjacent DRGs.
... substances that could harm human health or the environment. Hazardous means dangerous, so these materials must be ...
International Nuclear Information System (INIS)
Powers, J.
1991-01-01
A number of terms (e.g., "hazardous chemicals," "hazardous materials," "hazardous waste," and similar nomenclature) refer to substances that are subject to regulation under one or more federal environmental laws. State laws and regulations also provide additional, similar, or identical terminology that may be confused with the federally defined terms. Many of these terms appear synonymous, and it is easy to use them interchangeably. However, in a regulatory context, inappropriate use of narrowly defined terms can lead to confusion about the substances referred to, the statutory provisions that apply, and the regulatory requirements for compliance under the applicable federal statutes. This Information Brief provides regulatory definitions, a brief discussion of compliance requirements, and references for the precise terminology that should be used when referring to "hazardous" substances regulated under federal environmental laws. A companion CERCLA Information Brief (EH-231-004/0191) addresses "toxic" nomenclature.
DEFF Research Database (Denmark)
Grislain-Letrémy, Céline; Katossky, Arthur
2014-01-01
The willingness of households to pay for prevention against industrial risks can be revealed by real estate markets. By using very rich microdata, we study housing prices in the vicinity of hazardous industries near three important French cities. We show that the impact of hazardous plants...... to important biases in the estimated value of the impact of hazardous plants on housing values....
Computer Models Used to Support Cleanup Decision Making at Hazardous and Radioactive Waste Sites
This report is a product of the Interagency Environmental Pathway Modeling Workgroup. It will help bring a uniform approach to solving environmental modeling problems common to site remediation and restoration efforts.
Site characterization and modeling to estimate movement of hazardous materials in groundwater
International Nuclear Information System (INIS)
Ditmars, J.D.
1988-01-01
A quantitative approach for evaluating the effectiveness of site characterization measurement activities is developed and illustrated with an example application to hypothetical measurement schemes at a potential geologic repository site for radioactive waste. The method is a general one and could also be applied at sites for underground disposal of hazardous chemicals. The approach presumes that measurements will be undertaken to support predictions of the performance of some aspect of a constructed facility or natural system. It requires a quantitative performance objective, such as groundwater travel time or contaminant concentration, against which to compare predictions of performance. The approach recognizes that such predictions are uncertain because the measurements upon which they are based are uncertain. The effectiveness of measurement activities is quantified by a confidence index, β, that reflects the number of standard deviations separating the best estimate of performance from the predetermined performance objective. Measurements that reduce the uncertainty in predictions lead to increased values of β. The link between measurement and prediction uncertainties, required for the evaluation of β for a particular measurement scheme, identifies the measured quantities that significantly affect prediction uncertainty. The components of uncertainty in those key measurements are spatial variation, noise, estimation error, and measurement bias. 7 refs., 4 figs
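A minimal sketch of the confidence index as described, with the sign convention that a larger β means more confidence the objective is met; the quantities below are illustrative only.

```python
# beta counts the standard deviations separating the best-estimate
# performance from the performance objective.
def confidence_index(best_estimate, objective, std_dev):
    return (best_estimate - objective) / std_dev

# Hypothetical case: predicted groundwater travel time 5000 y
# (sigma 1500 y) against a 1000-y objective gives beta ~ 2.7.
# Better measurements raise beta by shrinking sigma.
beta = confidence_index(best_estimate=5000.0, objective=1000.0, std_dev=1500.0)
print(f"beta = {beta:.1f}")
```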
Centers for Disease Control (CDC) Podcasts
Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, The Centers for Disease Control and Prevention (CDC) has been in the forefront of efforts to protect and assess people's exposure to environmental and hazardous chemicals. This report provides information about hazardous chemicals and useful tips on how to protect you and your family from harmful exposure.
International Nuclear Information System (INIS)
Khan, M.A.
1992-01-01
Welding technology is advancing rapidly in the developed countries and has evolved into a science. Welding processes involving the use of electricity include resistance welding. Welding shops are opened in residential areas, causing safety hazards, particularly for teenagers and children who eagerly watch the welding arc with their naked eyes. There are radiation hazards from ultraviolet rays, which irritate the skin and eyes. Welding arc light of such intensity could damage the eyes. (Orig./A.B.)
Zhang, Shengyong
2017-07-01
Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An effort to increase the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted for reducing weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate "typical" vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.
Carbon Structure Hazard Control
Yoder, Tommy; Greene, Ben; Porter, Alan
2015-01-01
Carbon composite structures are widely used in virtually all advanced technology industries for a multitude of applications. The high strength-to-weight ratio and resistance to aggressive service environments make them highly desirable. The automotive, aerospace, and petroleum industries extensively use, and will continue to use, this enabling technology. As a result of this broad range of use, field and test personnel are increasingly exposed to hazards associated with these structures. No single published document exists that addresses the hazards and makes recommendations for the hazard controls required for the different exposure possibilities from damaged structures, including airborne fibers, fly, and dust. The potential for personnel exposure varies depending on the application or manipulation of the structure. The effects of exposure to carbon hazards are not limited to personnel; protection of electronics and mechanical equipment must be considered as well. The various exposure opportunities defined in this document include pre-manufacturing fly and dust, the cured structure, manufacturing/machining, post-event cleanup, and post-event test and/or evaluation. Hazard control is defined as it is applicable or applied for the specific exposure opportunity. The carbon exposure hazard includes fly, dust, fiber (cured/uncured), and matrix vapor/thermal decomposition products. By using the recommendations in this document, a high level of confidence can be assured for the protection of personnel and equipment.
DEFF Research Database (Denmark)
Valentin, Jan B.; Andreetta, Christian; Boomsma, Wouter
2014-01-01
We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale. ... The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. © 2013 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Jeremy D. Bricker
2017-02-01
Full Text Available The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, among them a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented because of timely evacuation of low-lying areas. We hindcast the flood using the BREACH physically based dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those resulting from various empirical formulas and a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as using raw topography underestimates flow depth and speed and overestimates flood wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model, but the 1-dimensional model generates a more conservative result and can be run in a much shorter time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.
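For context, peak breach outflow of the kind produced by the BREACH hydrograph is often cross-checked against empirical regressions; a commonly cited example is Froehlich's 1995 relation, sketched below with hypothetical inputs (not Kali Gandaki values).

```python
# Froehlich (1995) empirical peak-outflow regression for dam breaches:
# Qp = 0.607 * V^0.295 * H^1.24, with V in m^3, H in m, Qp in m^3/s.
def froehlich_peak_discharge(volume_m3, depth_m):
    return 0.607 * volume_m3**0.295 * depth_m**1.24

# Hypothetical impounded volume and water depth at failure.
q_peak = froehlich_peak_discharge(volume_m3=5e6, depth_m=20.0)
print(f"Q_peak ~ {q_peak:.0f} m^3/s")
```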
Differences in the Aspect Ratio of Gold Nanorods that Induce Defects in Cell Membrane Models.
Lins, Paula M P; Marangoni, Valéria S; Uehara, Thiers M; Miranda, Paulo B; Zucolotto, Valtencir; Cancino-Bernardi, Juliana
2017-12-19
Understanding the interactions between biomolecules and nanomaterials is of great importance for many areas of nanomedicine and bioapplications. Although studies in this area have been performed, the interactions between cell membranes and nanoparticles are not fully understood. Here, we investigate the interactions that occur between Langmuir monolayers of dipalmitoylphosphatidyl glycerol (DPPG) and dipalmitoylphosphatidyl choline (DPPC) with gold nanorods (NRs) of three aspect ratios, and with gold nanoparticles. Our results showed that the aspect ratio of the NRs influenced the interactions with both monolayers, suggesting that physical morphology and electrostatic forces govern the interactions in the DPPG-NR system, whereas van der Waals interactions are predominant in the DPPC-NR systems. Size influences the expansion isotherms in both systems, but the lipid tails remain conformationally ordered upon expansion, which suggests phase separation between the lipids and nanomaterials at the interface. The coexistence of lipid and NP regions affects the elasticity of the monolayer. When two phases coexist, the elasticity does not reflect the lipid packing state but depends on the elasticity of the NP islands. Therefore, the results corroborate that nanomaterials influence the packing and phase behavior of mimetic cell membranes. For this reason, developing a methodology to understand membrane-nanomaterial interactions is of great importance.
Directory of Open Access Journals (Sweden)
R. Rouffaud
2017-02-01
Full Text Available Piezoelectric Single Crystals (PSCs) are increasingly used in the manufacture of ultrasonic transducers, in particular for linear arrays or single-element transducers. Among these PSCs, according to their microstructure and poled direction, some exhibit mm2 symmetry. The analytical expression of the electromechanical coupling coefficient for a vibration mode along the poling direction of a piezoelectric rectangular bar resonator is established. It is based on the mode coupling theory and the fundamental energy-ratio definition of electromechanical coupling coefficients. This unified formula for the mm2 symmetry class is obtained as a function of an aspect ratio (G), where the two extreme cases correspond to a thin plate (with a vibration mode characterized by the thickness coupling factor, kt) and a thin bar (characterized by k33′). To optimize the k33′ value for the thin-bar design, a rotation of the crystallographic axes in the plane orthogonal to the poling direction is performed to select the highest value for PIN-PMN-PT single crystal. Finally, finite element calculations are performed to deduce resonance frequencies and coupling coefficients over a large range of G values to confirm the developed analytical relations.
International Nuclear Information System (INIS)
Zhang Qing-Yu; Zhu Ming-Fang; Sun Dong-Ke
2017-01-01
A multicomponent multiphase (MCMP) pseudopotential lattice Boltzmann (LB) model with large liquid–gas density ratios is proposed for simulating the wetting phenomena. In the proposed model, two layers of neighboring nodes are adopted to calculate the fluid–fluid cohesion force with higher isotropy order. In addition, the different-time-step method is employed to calculate the processes of particle propagation and collision for the two fluid components with a large pseudo-particle mass contrast. It is found that the spurious current is remarkably reduced by employing the higher isotropy order calculation of the fluid–fluid cohesion force. The maximum spurious current appearing at the phase interfaces is evidently influenced by the magnitudes of fluid–fluid and fluid–solid interaction strengths, but weakly affected by the time step ratio. The density ratio analyses show that the liquid–gas density ratio is dependent on both the fluid–fluid interaction strength and the time step ratio. For the liquid–gas flow simulations without solid phase, the maximum liquid–gas density ratio achieved by the present model is higher than 1000:1. However, the obtainable maximum liquid–gas density ratio in the solid–liquid–gas system is lower. Wetting phenomena of droplets contacting smooth/rough solid surfaces and the dynamic process of liquid movement in a capillary tube are simulated to validate the proposed model in different solid–liquid–gas coexisting systems. It is shown that the simulated intrinsic contact angles of droplets on smooth surfaces are in good agreement with those predicted by the constructed LB formula that is related to Young’s equation. The apparent contact angles of droplets on rough surfaces compare reasonably well with the predictions of Cassie’s law. For the simulation of liquid movement in a capillary tube, the linear relation between the liquid–gas interface position and simulation time is observed, which is identical to
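For orientation, the single-layer pseudopotential (Shan-Chen type) cohesion force on a D2Q9 lattice looks as follows; the model above extends this stencil to two layers of neighboring nodes to raise the isotropy order. Sign conventions for G and the form of ψ vary across the literature, so treat this as a sketch.

```python
import numpy as np

W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)                # D2Q9 weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])           # velocity set

def cohesion_force(psi, G=-1.0):
    # F(x) = -G * psi(x) * sum_i w_i * psi(x + e_i) * e_i  (periodic domain)
    fx = np.zeros_like(psi)
    fy = np.zeros_like(psi)
    for w, (ex, ey) in zip(W[1:], E[1:]):                    # skip rest vector
        shifted = np.roll(np.roll(psi, -ex, axis=0), -ey, axis=1)  # psi(x+e)
        fx += w * shifted * ex
        fy += w * shifted * ey
    return -G * psi * fx, -G * psi * fy
```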
Directory of Open Access Journals (Sweden)
Mikko Niilo-Rämä
2014-06-01
Full Text Available A novel estimator for the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths directly, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is proposed as a new estimator of the mean length. For this estimator, an approximation of its variance is derived. The accuracy of the approximations is evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on crystalline nanocellulose.
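The estimator's logic can be sketched as follows. Under the stated Boolean segment model, standard integral-geometric bias formulas (reconstructed here from the model assumptions; they are not necessarily the paper's exact expressions, and edge effects for lengths near the window size are ignored) give the expected minus/plus ratio as a function of the mean length m, which is then inverted numerically.

```python
import numpy as np
from scipy.optimize import brentq

# Square window of side a; exponential lengths with mean m; uniform
# directions. Expected counts are proportional to these "areas":
def expected_ratio(m, a):
    minus = a**2 - 4*a*m/np.pi + 2*m**2/np.pi   # segments fully inside
    plus = a**2 + 4*a*m/np.pi                   # segments hitting the window
    return minus / plus

def estimate_mean_length(observed_ratio, a):
    # Invert expected_ratio numerically, as the proposed estimator does.
    return brentq(lambda m: expected_ratio(m, a) - observed_ratio, 1e-9, a)

print(estimate_mean_length(observed_ratio=0.8, a=100.0))
```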
International Nuclear Information System (INIS)
Wilson, J.S.; Genant, H.K.; Hattner, R.S.; Hoffer, P.B.
1978-01-01
The ratio of late to early uptake of several radionuclides was examined as a method for distinguishing states of abnormal bone metabolism. Nutritional osteoporosis (secondary hyperparathyroidism) and osteomalacia were produced in young rats and compared to a control group. The ratio of early (3 to 6 hrs) to late (4 to 6 days) uptake of barium-131 nitrate, indium-111 EDTMP, and lead-203 was studied, as was that of strontium-85 chloride, a calcium analogue. Ratios of late to early uptake were found to distinguish osteomalacia from osteoporosis in the models when strontium-85 or barium-131 was used. Barium-131 may be a clinically useful alternative to strontium-85 in the evaluation of metabolic bone disease due to its shorter half-life and lower radiation dose.
Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M.
2014-01-01
Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.
International Nuclear Information System (INIS)
Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo
2016-01-01
It is necessary for Japanese electric power companies to assess tornado risks at nuclear power plants according to a new regulation issued in 2013. The new regulatory guide recommends selecting a long narrow strip area along the coastline, extending 5 km to the seaward and landward sides, as the target area of tornado risk assessment, because most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. However, it is very difficult to evaluate a tornado hazard along a coastline, because no information on F-scale and damage length/width is available for tornadic waterspouts. The purpose of this study is to propose a new tornado wind hazard model for a limited area (TOWLA), which can be applied to a long narrow strip area along a coastline. In order to consider tornadic waterspouts that moved inland, we evaluate the number of waterspouts entering or passing the target area and add them to the total number of tornadoes that occurred in the area. A characteristic of the model is the use of 'segment lengths' instead of damage lengths. The segment length is the part of the tornado footprint within the long narrow strip area. We show two methods for segment length computation. One is based on tornado records: the latitude and longitude of tornado genesis and dissipation locations. The other is to compute the expected segment length based on the geometrical relationship among the damage length, area width, and directional characteristics of tornado movement. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step towards producing a good map of a disease. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011, were used in this work: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) criteria were used to compare and present the preliminary results; such GOF criteria are common in statistical modelling for comparing fitted models. The main general results presented in this study show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models.
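For reference, the SMR baseline that the three Bayesian models refine is just observed over expected counts; the toy computation below (hypothetical numbers) also shows the zero-count problem mentioned above.

```python
import numpy as np

observed = np.array([12, 0, 7, 25])                    # cases per district
population = np.array([50_000, 8_000, 30_000, 90_000])

overall_rate = observed.sum() / population.sum()       # study-area rate
expected = overall_rate * population                   # indirect standardization

smr = observed / expected                              # relative risk per district
print(smr)  # a district with no observed cases gets SMR = 0, the degenerate
            # estimate the Poisson-gamma, BYM and Mixture models smooth away
```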
U.S. Department of Energy Workers' mental models of radiation and chemical hazards in the workplace
International Nuclear Information System (INIS)
Quadrel, M.J.; Blanchard, K.A.; Lundgren, R.E.; McMakin, A.H.; Mosley, M.T.; Strom, D.J.
1994-05-01
A pilot study was performed to test the mental models methodology regarding knowledge and perceptions of U.S. Department of Energy contractor radiation workers about ionizing radiation and hazardous chemicals. The mental models methodology establishes a target population's beliefs about risks and compares them with current scientific knowledge. The ultimate intent is to develop risk communication guidelines that address information gaps or misperceptions that could affect decisions and behavior. In this study, 15 radiation workers from the Hanford Site in Washington State were interviewed about radiation exposure processes and effects. Their beliefs were mapped onto a science model of the same topics to see where differences occurred. In general, workers' mental models covered many of the high-level parts of the science model but did not have the same level of detail. The following concepts appeared to be well understood by most interviewees: types, form, and properties of workplace radiation; administrative and physical controls to reduce radiation exposure risk; and the relationship of dose and effects. However, several concepts were rarely mentioned by most interviewees, indicating potential gaps in worker understanding. Most workers did not discuss the wide range of measures for neutralizing or decontaminating individuals following internal contamination. Few noted specific ways of measuring dose or factors that affect dose. Few mentioned the range of possible effects, including genetic effects, birth defects, or high dose effects. Variables that influence potential effects were rarely discussed. Workers rarely mentioned how basic radiation principles influenced the source, type, or mitigation of radiation risk in the workplace
Estimation in the positive stable shared frailty Cox proportional hazards model
DEFF Research Database (Denmark)
Martinussen, Torben; Pipper, Christian Bressen
2005-01-01
model in situations where the correlated survival data show a decreasing association with time. In this paper, we devise a likelihood based estimation procedure for the positive stable shared frailty Cox model, which is expected to obtain high efficiency. The proposed estimator is provided with large...
Directory of Open Access Journals (Sweden)
Özlem TÜRKŞEN
2018-03-01
Full Text Available Some experimental designs contain replicated response measures in which the replications cannot be identified exactly and may carry uncertainty beyond randomness. In such cases, classical regression analysis may not be appropriate for modelling the designed data because the probabilistic modelling assumptions are violated, and fuzzy regression analysis can be used as a modelling tool instead. In this study, the replicated response values are transformed into fuzzy numbers by using descriptive statistics of the replications and the golden ratio. The main aim of the study is to obtain the most suitable fuzzy model for replicated response measures through fuzzification of the replicated values, taking into account the data structure of the replications in a statistical framework. Here, the response and the unknown model coefficients are considered triangular type-1 fuzzy numbers (TT1FNs), whereas the inputs are crisp. Predicted fuzzy models are obtained according to the proposed fuzzification rules by using the Fuzzy Least Squares (FLS) approach. The performances of the predicted fuzzy models are compared by using the Root Mean Squared Error (RMSE) criterion. A data set from the literature, called the wheel cover component data set, is used to illustrate the performance of the proposed approach, and the obtained results are discussed. The calculation results show that the combined use of descriptive statistics and the golden ratio is the most preferable fuzzification rule according to the well-known decision-making method TOPSIS.
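As an illustrative guess at the flavor of the fuzzification, not the paper's exact rules: build a triangular type-1 fuzzy number from the descriptive statistics of the replicates, with the golden ratio scaling the spreads.

```python
import numpy as np

PHI = (1 + 5**0.5) / 2   # golden ratio

def to_tt1fn(replicates):
    # Center the triangle at the replicate mean; the spread scaling by
    # the golden ratio is an assumed rule for illustration only.
    reps = np.asarray(replicates, dtype=float)
    center = reps.mean()
    spread = reps.std(ddof=1) / PHI
    return center - spread, center, center + spread   # (left, mode, right)

print(to_tt1fn([10.2, 9.8, 10.7]))
```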
International Nuclear Information System (INIS)
Katsoyiannis, Athanasios; Breivik, Knut
2014-01-01
Polycyclic Aromatic Hydrocarbon (PAH) molecular diagnostic ratios (MDRs) are unitless concentration ratios of PAH pairs with the same molecular weight (MW); MDRs have long been used as a tool for PAH source identification. In the present paper, the efficiency of the MDR methodology is evaluated through the use of a multimedia fate model, the calculation of characteristic travel distances (CTD) and the estimation of air concentrations for individual PAHs as a function of distance from an initial point source. The results show that PAHs with the same MW are sometimes characterized by substantially different CTDs, and therefore their air concentrations, and hence MDRs, are predicted to change as the distance from the original source increases. Among the assessed PAH pairs, the biggest CTD difference is seen for Fluoranthene (107 km) vs. Pyrene (26 km). This study provides a strong indication that MDRs are of limited use as a source identification tool. -- Highlights: • Model-based evaluation of the efficiency of PAH molecular diagnostic ratios. • Individual PAHs are characterized by different characteristic travel distances. • MDRs are shown to be a limited tool for source identification. • Use of MDRs for other environmental media is likely unfeasible. -- PAH molecular diagnostic ratios that change greatly as a function of distance from the emitting source are improper for source identification purposes
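The core argument is easy to reproduce with the quoted CTDs if one assumes, as a single-media simplification of the multimedia model, that air concentration decays roughly as exp(-x/CTD) with distance x from the source; the initial concentrations are hypothetical.

```python
import numpy as np

ctd_flu, ctd_pyr = 107.0, 26.0          # km, CTDs quoted in the study
x = np.array([0.0, 25.0, 50.0, 100.0])  # km from the point source

c_flu = 1.0 * np.exp(-x / ctd_flu)      # assumed equal source concentrations
c_pyr = 1.0 * np.exp(-x / ctd_pyr)

print(c_flu / (c_flu + c_pyr))          # Flu/(Flu+Pyr) MDR vs. distance
```

With these numbers the Fluoranthene share of the pair rises from 0.5 at the source to above 0.9 at 100 km, so the "source signature" read from the MDR drifts purely through transport.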
A Coupled Damage and Reaction Model for Simulating Energetic Material Response to Impact Hazards
International Nuclear Information System (INIS)
Baer, Melvin R.; Drumheller, D.S.; Matheson, E.R.
1999-01-01
The Baer-Nunziato multiphase reactive theory for a granulated bed of energetic material is extended to allow for dynamic damage processes that generate new surfaces as well as porosity. The Second Law of Thermodynamics is employed to constrain the constitutive forms of the mass, momentum, and energy exchange functions, as well as those of the mechanical damage model, ensuring that the models are dissipative. The focus here is on the constitutive forms of the exchange functions; the mechanical constitutive modeling is discussed in a companion paper. The mechanical damage model provides the dynamic surface area and porosity information needed by the exchange functions to compute combustion rates and interphase momentum and energy exchange rates. The models are implemented in the CTH shock physics code and used to simulate delayed detonations due to impacts in a bed of granulated energetic material and an undamaged cylindrical sample.
Inverse modeling of GOSAT-retrieved ratios of total column CH4 and CO2 for 2009 and 2010
Directory of Open Access Journals (Sweden)
S. Pandey
2016-04-01
Full Text Available This study investigates the constraint provided by greenhouse gas measurements from space on surface fluxes. Imperfect knowledge of the light path through the atmosphere, arising from scattering by clouds and aerosols, can create biases in column measurements retrieved from space. To minimize the impact of such biases, ratios of total column retrieved CH4 and CO2 (Xratio) have been used. We apply the ratio inversion method described in Pandey et al. (2015) to retrievals from the Greenhouse Gases Observing SATellite (GOSAT). The ratio inversion method uses the measured Xratio as a weak constraint on CO2 fluxes. In contrast, the more common approach of inverting proxy CH4 retrievals (Frankenberg et al., 2005) prescribes atmospheric CO2 fields and optimizes only CH4 fluxes. The TM5–4DVAR (Tracer Transport Model version 5–variational data assimilation) inverse modeling system is used to simultaneously optimize the fluxes of CH4 and CO2 for 2009 and 2010. The results are compared to proxy inversions using model-derived CO2 mixing ratios (XCO2model) from CarbonTracker and the Monitoring Atmospheric Composition and Climate (MACC) Reanalysis CO2 product. The performance of the inverse models is evaluated using measurements from three aircraft measurement projects. Xratio and XCO2model are compared with TCCON retrievals to quantify the relative importance of errors in these components of the proxy XCH4 retrieval (XCH4proxy). We find that the retrieval errors in Xratio (mean = 0.61%) are generally larger than the errors in XCO2model (mean = 0.24% and 0.01% for CarbonTracker and MACC, respectively). On the annual timescale, the CH4 fluxes from the different satellite inversions are generally in agreement with each other, suggesting that errors in XCO2model do not limit the overall accuracy of the CH4 flux estimates. On the seasonal timescale, however, larger differences are found due to uncertainties in XCO2model, particularly
International Nuclear Information System (INIS)
Shan Ming-Lei; Zhu Chang-Ping; Yao Cheng; Yin Cheng; Jiang Xiao-Yan
2016-01-01
The dynamics of cavitation bubble collapse is a fundamental issue for both the application and the prevention of bubble collapse. In the present work, the modified forcing scheme for the pseudopotential multi-relaxation-time lattice Boltzmann model developed by Li Q et al. [Li Q, Luo K H and Li X J 2013 Phys. Rev. E 87 053301] is adopted to develop a cavitation bubble collapse model. The improved pseudopotential multi-relaxation-time lattice Boltzmann model is investigated with respect to coexistence curves and Laplace law verification. It is found that the thermodynamic consistency and surface tension are independent of kinematic viscosity. Through homogeneous and heterogeneous cavitation simulations, the ability of the present model to describe cavitation bubble development as well as cavitation inception is verified. The bubble collapse between two parallel walls is simulated. The dynamic process of a collapsing bubble is consistent with results from experiments and from simulations by other numerical methods. It is demonstrated that the present pseudopotential multi-relaxation-time lattice Boltzmann model is applicable and efficient, and that the lattice Boltzmann method is an alternative tool for modeling collapsing bubbles. (paper)
Yule, D.; Lave, J.; Kumar, S.; Wesnousky, S.
2007-12-01
Himalaya in over 500 years and that Mw 7.5-8.4 earthquakes are the 'moderate' earthquakes. Further study to constrain the lateral extent and recurrence of the great paleoearthquakes of the central Himalaya is critical to answering important questions about the Himalayan earthquake cycle and the seismic hazard facing the rapidly urbanizing population of the region.
Ichida, J M; Wassell, J T; Keller, M D; Ayers, L W
1993-02-01
Survival analysis methods are valuable for detecting intervention effects because detailed information from patient records and sensitive outcome measures are used. The burn unit at a large university hospital replaced routine bathing with total body bathing using chlorhexidine gluconate for antimicrobial effect. A Cox proportional hazards model was used to analyse the time from admission until either infection with Staphylococcus aureus or discharge for 155 patients, controlling for burn severity and two time-dependent covariates: days until first wound excision and days until first administration of prophylactic antibiotics. The risk of infection was 55 per cent higher in the historical control group, although the difference was not statistically significant. There was also some indication that early wound excision may be important as an infection-control measure for burn patients.
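A sketch of this style of analysis using the lifelines library's time-varying Cox fitter, with a toy long-format data frame standing in for the burn-unit records (a real analysis needs many more rows; the small penalizer only stabilizes the toy fit).

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Start/stop intervals let covariates such as "wound excised yet"
# switch on during follow-up, as in the study's time-dependent design.
df = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3],
    "start":    [0, 4, 0, 0, 6],
    "stop":     [4, 9, 7, 6, 12],
    "severity": [30, 30, 55, 20, 20],   # % body surface burned (hypothetical)
    "excised":  [0, 1, 0, 0, 1],        # time-dependent covariate
    "infected": [0, 1, 1, 0, 0],        # S. aureus infection event
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="infected",
        start_col="start", stop_col="stop")
ctv.print_summary()
```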
Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana
2017-11-01
The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect laboratory for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring networks, and particularly the rapid geodynamics, which clearly express several seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults using both an historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented in a recently developed software tool, FiSH (Pace et al., 2016), that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent, fault-based modeling, joined with 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps
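A back-of-envelope version of the fault-based recurrence logic (FiSH handles magnitude and rate uncertainties properly): the seismic moment of the characteristic event divided by the fault's moment buildup rate. All parameter values below are illustrative, not Etna-specific.

```python
# Seismic moment (N m) from moment magnitude: Mo = 10^(1.5*Mw + 9.1).
def moment_Nm(mw):
    return 10 ** (1.5 * mw + 9.1)

def mean_recurrence_years(mw_char, slip_rate_m_per_yr, area_m2, mu=3e10):
    # Moment buildup rate = rigidity * fault area * slip rate.
    moment_rate = mu * area_m2 * slip_rate_m_per_yr
    return moment_Nm(mw_char) / moment_rate

# Hypothetical shallow fault: Mw 5.0 characteristic event, 2 mm/yr slip,
# 30 km^2 area -> recurrence of a few decades.
print(mean_recurrence_years(mw_char=5.0, slip_rate_m_per_yr=2e-3, area_m2=30e6))
```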
Directory of Open Access Journals (Sweden)
S. K. Allen
2009-03-01
Full Text Available Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and, combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote-sensing-based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well-vegetated, low-gradient outlet areas suggest that catastrophic dam failure and flooding are unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modelling with RAMMS was introduced for local-scale analyses of rock avalanche events and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach road and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located in close proximity to initiation zones of steep ice. Further local-scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.
Witte, L.
2014-06-01
To support landing-site assessments for HDA-capable flight systems and to facilitate trade studies between potential HDA architectures and the resulting probability of safe landing, a stochastic landing dispersion model has been developed.
International Nuclear Information System (INIS)
Baruffi, F.; Cisotto, A.; Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M.; Pretner, A.; Galli, A.; Scarinci, A.; Marsala, V.; Panelli, C.; Gualdi, S.; Bucchignani, E.; Torresan, S.; Pasini, S.; Critto, A.
2012-01-01
Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to their relevant link with European water policy directives (e.g. 2000/60/EC and 2007/118/EC) and related environmental objectives. Understanding the long-term impacts of climate variability and change is therefore a key challenge in order to address effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the high Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961–1990 and the projection period 2010–2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections were made prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapo-transpiration fields used for the impact analysis. Based on the downscaled climate projections, 3 reference scenarios for the period 2071–2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble
Energy Technology Data Exchange (ETDEWEB)
Baruffi, F. [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Cisotto, A., E-mail: segreteria@adbve.it [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M. [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Pretner, A.; Galli, A. [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Scarinci, A., E-mail: andrea.scarinci@sgi-spa.it [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Marsala, V.; Panelli, C. [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Gualdi, S., E-mail: silvio.gualdi@bo.ingv.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Bucchignani, E., E-mail: e.bucchignani@cira.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Torresan, S., E-mail: torresan@cmcc.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Pasini, S., E-mail: sara.pasini@stud.unive.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Calle Larga S. Marta 2137, 30123 Venice (Italy); Critto, A., E-mail: critto@unive.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Calle Larga S. Marta 2137, 30123 Venice (Italy); and others
2012-12-01
Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, attracting international attention and interest from both researchers and policy makers due to their relevant link with European water policy directives (e.g. 2000/60/EC and 2006/118/EC) and related environmental objectives. Understanding the long-term impacts of climate variability and change is therefore a key challenge for adopting effective protection measures and implementing sustainable management of water resources. This paper presents the modeling approach adopted within the LIFE+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the upper Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to produce climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections were made prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapotranspiration fields used for the impact analysis. Based on downscaled climate projections, three reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and a mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced
Cochachin, Alejo; Frey, Holger; Huggel, Christian; Strozzi, Tazio; Büechi, Emanuel; Cui, Fanpeng; Flores, Andrés; Saito, Carlos
2017-04-01
The Safuna glacial lakes (77° 37' W, 08° 50' S) are located in the headwaters of the Tayapampa catchment, in the northernmost part of the Cordillera Blanca, Peru. The upper lake, Laguna Safuna Alta at 4354 m asl, formed in the 1960s behind a terminal moraine of the retreating Pucajirca Glacier, named after the peak south of the lakes. Safuna Alta currently has a volume of 15 × 10^6 m³. In 2002 a rock fall of several million m³ from the proximal left lateral moraine hit the Safuna Alta lake and triggered an impact wave which overtopped the moraine dam and passed into the lower lake, Laguna Safuna Baja. The lower lake absorbed most of the outburst flood from the upper lake, but the event nevertheless caused loss of cattle, degradation of agricultural land downstream and damage to a hydroelectric power station in the Quitaracsa gorge. Event reconstructions showed that the impact wave in the Safuna Alta lake had a runup height of 100 m or more and weakened the moraine dam of Safuna Alta. This fact, in combination with the large lake volumes and the continued possibility of landslides from the left proximal moraine, poses a considerable risk for the downstream settlements as well as the recently completed Quitaracsa hydroelectric power plant. In the framework of a project funded by the European Space Agency (ESA), the hazard situation at the Safuna Alta lake is assessed by a combination of satellite radar data analysis, field investigations, and slope stability modeling. Interferometric Synthetic Aperture Radar (InSAR) analyses of ALOS-1 PALSAR-1, ALOS-2 PALSAR-2 and Sentinel-1 data from 2016 reveal terrain displacements of 2 cm yr⁻¹ in the detachment zone of the 2002 rock avalanche. More detailed insights into the characteristics of these terrain deformations are gained by repeat surveys with differential GPS (DGPS) and tachymetric measurements. A drone flight provides the information for the generation of a high-resolution digital elevation model (DEM), which is used for the
Directory of Open Access Journals (Sweden)
Zongmin Yue
2013-01-01
Full Text Available We investigated the dynamics of a diffusive ratio-dependent Holling-Tanner predator-prey model with Smith growth subject to a zero-flux boundary condition. Some qualitative properties, including dissipation, persistence, and the local and global stability of the positive constant solution, are discussed. Moreover, we give refined a priori estimates of positive solutions and derive some results for the existence and nonexistence of nonconstant positive steady states.
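The abstract does not reproduce the governing equations. For orientation only, a representative form of a diffusive ratio-dependent Holling-Tanner system with Smith growth is sketched below; u and v denote prey and predator densities, and all parameter names (r, a, c, m, s, h, d1, d2) are assumptions for illustration, not the paper's notation:

```latex
% Representative system (notation assumed), posed on a domain Omega
% with outward normal nu and zero-flux (Neumann) boundary conditions.
\begin{aligned}
\frac{\partial u}{\partial t} - d_1 \Delta u
  &= \frac{r\,u\,(1-u)}{1 + a u} \;-\; \frac{c\,u v}{u + m v}
  && \text{(Smith growth, ratio-dependent response)}\\
\frac{\partial v}{\partial t} - d_2 \Delta v
  &= s\,v\left(1 - \frac{h\,v}{u}\right)
  && \text{(Holling-Tanner predator growth)}\\
\partial_{\nu} u = \partial_{\nu} v &= 0 \quad \text{on } \partial\Omega
  && \text{(zero-flux boundary condition)}
\end{aligned}
```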
Charge–mass ratio bound and optimization in the Parikh–Wilczek tunneling model of Hawking radiation
International Nuclear Information System (INIS)
Kim, Kyung Kiu; Wen, Wen-Yu
2014-01-01
In this Letter, we study the mutual information hidden in the Parikh–Wilczek tunneling model of Hawking radiation for Reissner–Nordström black holes. We argue that the condition of nonnegativity of mutual information suggests bound(s) for charge–mass ratio of emitted particles. We further view the radiation as an optimization process and discuss its effect on time evolution of a charged black hole.
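The entropy bookkeeping behind this argument can be illustrated numerically. The sketch below assumes natural units, the standard Reissner-Nordström entropy S = π r₊², and the Parikh-Wilczek result Γ ∝ exp(ΔS_BH); the mutual-information definition and all parameter values are illustrative assumptions, not taken from the Letter:

```python
import math

def entropy(M, Q):
    """Bekenstein-Hawking entropy S = pi * r_+^2 of a Reissner-Nordstrom
    black hole in natural units (G = c = hbar = k_B = 1)."""
    if Q**2 > M**2:
        raise ValueError("naked singularity: |Q| > M")
    r_plus = M + math.sqrt(M**2 - Q**2)
    return math.pi * r_plus**2

def log_gamma(M, Q, w, q):
    """ln of the tunneling probability for emitting a shell of energy w and
    charge q from a hole (M, Q): Gamma ~ exp(Delta S_BH)."""
    return entropy(M - w, Q - q) - entropy(M, Q)

def mutual_information(M, Q, w1, q1, w2, q2):
    """One common definition in this literature:
    I(1:2) = ln Gamma(w2 | after emission 1) - ln Gamma(w2 | fresh hole)."""
    return log_gamma(M - w1, Q - q1, w2, q2) - log_gamma(M, Q, w2, q2)

M, Q = 100.0, 30.0                     # illustrative black hole
print(mutual_information(M, Q, 1.0, 0.5, 1.0, 0.5))
# A sign change of I as q/w is varied is the kind of condition from which
# a charge-mass ratio bound on emitted particles can be read off.
```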
Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt
International Nuclear Information System (INIS)
Branlard, E; Gaunaa, M; Machefaux, E
2014-01-01
The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an actuator-line code and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxation of the main assumptions of the model is briefly presented and discussed.
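For context, the sketch below shows the classical skewed-wake correction that standard BEM yaw models apply on top of the mean axial induction, the baseline that the vortex-based model above refines for finite tip-speed ratio. The skew-angle estimate and the azimuth sign convention are textbook assumptions and vary between codes; nothing here is the paper's model:

```python
import math

def axial_induction(ct):
    """Invert the axial momentum relation CT = 4a(1 - a); valid for a < 0.5
    (no high-thrust correction in this sketch)."""
    return 0.5 * (1.0 - math.sqrt(max(0.0, 1.0 - ct)))

def yawed_induction(a_mean, yaw_angle, r_over_R, psi):
    """Classical skewed-wake (Glauert/Coleman-type) correction:
        a(r, psi) = a_mean * (1 + K * (r/R) * cos(psi)),  K = 2 tan(chi/2),
    with the wake skew angle chi estimated as chi = (0.6 a + 1) * yaw."""
    chi = (0.6 * a_mean + 1.0) * yaw_angle
    K = 2.0 * math.tan(0.5 * chi)
    return a_mean * (1.0 + K * r_over_R * math.cos(psi))

a0 = axial_induction(0.6)                          # mean induction from CT
print(yawed_induction(a0, math.radians(20.0), 0.8, 0.0))
```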
Indian Academy of Sciences (India)
Keywords. Fibonacci numbers, golden ratio, Sanskrit prosody, solar panel. Abstract. Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many ...
A Diffuse Interface Model for Incompressible Two-Phase Flow with Large Density Ratios
Xie, Yu; Wodo, Olga; Ganapathysubramanian, Baskar
2016-01-01
In this chapter, we explore numerical simulations of incompressible and immiscible two-phase flows. The description of the fluid–fluid interface is introduced via a diffuse interface approach. The two-phase fluid system is represented by a coupled Cahn–Hilliard Navier–Stokes set of equations. We discuss challenges and approaches to solving this coupled set of equations using a stabilized finite element formulation, especially in the case of a large density ratio between the two fluids. Specific features that enabled efficient solution of the equations include: (i) a conservative form of the convective term in the Cahn–Hilliard equation which ensures mass conservation of both fluid components; (ii) a continuous formula to compute the interfacial surface tension which results in lower requirement on the spatial resolution of the interface; and (iii) a four-step fractional scheme to decouple pressure from velocity in the Navier–Stokes equation. These are integrated with standard streamline-upwind Petrov–Galerkin stabilization to avoid spurious oscillations. We perform numerical tests to determine the minimal resolution of spatial discretization. Finally, we illustrate the accuracy of the framework using the analytical results of Prosperetti for a damped oscillating interface between two fluids with a density contrast.
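For orientation, one standard form of the coupled Cahn-Hilliard Navier-Stokes system referenced above is sketched below, with the conservative convective term ∇·(uφ) corresponding to feature (i); a common choice for the continuous surface-tension force of feature (ii) is F_st = μ∇φ. The exact closures for ρ(φ), η(φ) and F_st used in the chapter are not given in the abstract, so the following is an assumed generic form:

```latex
% One standard form (notation assumed): phi = phase field, mu = chemical
% potential, u = velocity, p = pressure, D(u) = symmetric velocity gradient.
\begin{aligned}
\frac{\partial \phi}{\partial t} + \nabla\cdot(\mathbf{u}\,\phi)
  &= \nabla\cdot\left(M\,\nabla\mu\right),
  \qquad \mu = f'(\phi) - \epsilon^{2}\Delta\phi,\\
\rho(\phi)\!\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  &= -\nabla p
  + \nabla\cdot\big(\eta(\phi)\,\mathbf{D}(\mathbf{u})\big)
  + \mathbf{F}_{st}(\phi,\mu) + \rho(\phi)\,\mathbf{g},\\
\nabla\cdot\mathbf{u} &= 0.
\end{aligned}
```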
Persistent junk solutions in time-domain modeling of extreme mass ratio binaries
International Nuclear Information System (INIS)
Field, Scott E.; Hesthaven, Jan S.; Lau, Stephen R.
2010-01-01
In the context of metric perturbation theory for nonspinning black holes, extreme mass ratio binary systems are described by distributionally forced master wave equations. Numerical solution of a master wave equation as an initial boundary value problem requires initial data. However, because the correct initial data for generic-orbit systems is unknown, specification of trivial initial data is a common choice, despite being inconsistent and resulting in a solution which is initially discontinuous in time. As is well known, this choice leads to a burst of junk radiation which eventually propagates off the computational domain. We observe another potential consequence of trivial initial data: development of a persistent spurious solution, here referred to as the Jost junk solution, which contaminates the physical solution for long times. This work studies the influence of both types of junk on metric perturbations, waveforms, and self-force measurements, and it demonstrates that smooth modified source terms mollify the Jost solution and reduce junk radiation. Our concluding section discusses the applicability of these observations to other numerical schemes and techniques used to solve distributionally forced master wave equations.
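Schematically, the distributionally forced master wave equations referred to above take the following form in a tortoise-type coordinate x, with ψ the master function, V an effective potential, and x_p(t) the particle worldline; the symbols are generic stand-ins rather than the paper's exact notation:

```latex
% Generic schematic of a distributionally forced master equation with the
% "trivial initial data" choice discussed in the abstract.
\frac{\partial^{2}\psi}{\partial t^{2}}
  - \frac{\partial^{2}\psi}{\partial x^{2}}
  + V(x)\,\psi
  = G(t)\,\delta\big(x - x_{p}(t)\big)
  + F(t)\,\delta'\big(x - x_{p}(t)\big),
\qquad
\psi(0,x) = \partial_{t}\psi(0,x) = 0 .
```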
Shields, Matt
The development of Micro Aerial Vehicles has been hindered by the poor understanding of the aerodynamic loading and the stability and control properties of the low Reynolds number regime in which the inherent low aspect ratio (LAR) wings operate. This thesis experimentally evaluates the static and damping aerodynamic stability derivatives to provide a complete aerodynamic model for canonical flat plate wings of aspect ratios near unity at Reynolds numbers under 1 × 10^5. This permits the complete functionality of the aerodynamic forces and moments to be expressed and the equations of motion to be solved, thereby identifying the inherent stability properties of the wing. This provides a basis for characterizing the stability of full vehicles. The influence of the tip vortices during sideslip perturbations is found to induce a loading condition referred to as roll stall, a significant roll moment created by the spanwise induced-velocity asymmetry related to the displacement of the vortex cores relative to the wing. Roll stall is manifested by a linearly increasing roll moment with low to moderate angles of attack and a subsequent stall event similar to a lift polar; this behavior is not experienced by conventional (high aspect ratio) wings. The resulting large magnitude of the roll stability derivative, C_l,β, and lack of roll damping, C_l,p, create significant modal responses of the lateral state variables; a linear model used to evaluate these modes is shown to accurately reflect the solution obtained by numerically integrating the nonlinear equations. An unstable Dutch roll mode dominates the behavior of the wing for small perturbations from equilibrium, and in the presence of angle of attack oscillations a previously unconsidered coupled mode, referred to as roll resonance, is seen to develop and drive the bank angle φ away from equilibrium. Roll resonance requires a linear time-variant (LTV) model to capture the behavior of the bank angle, which is attributed to the
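The modal analysis described in this record reduces to an eigenvalue problem for a lateral-directional state matrix. A minimal sketch follows; every derivative value is an invented placeholder chosen only to mimic a large roll-stability derivative and weak roll damping, not data from the thesis:

```python
import numpy as np

# Illustrative lateral-directional small-perturbation model,
#   x = [beta, p, r, phi]^T  (sideslip, roll rate, yaw rate, bank angle),
# in normalized (per-second) derivative form. Values are placeholders.
Yv, Yp, Yr = -0.25, 0.0, -1.0    # side-force derivatives
Lb, Lp, Lr = -12.0, -0.5, 2.0    # roll moments: large |L_beta|, weak L_p
Nb, Np, Nr =   4.0, -0.1, -0.3   # yaw-moment derivatives
g_over_V = 9.81 / 15.0           # gravity over trim airspeed

A = np.array([
    [Yv, Yp, Yr - 1.0, g_over_V],   # beta-dot
    [Lb, Lp, Lr,       0.0],        # p-dot
    [Nb, Np, Nr,       0.0],        # r-dot
    [0.0, 1.0, 0.0,    0.0],        # phi-dot = p
])

for lam in np.linalg.eigvals(A):
    print(f"{lam:.3f}")  # a complex pair with positive real part would
                         # indicate an unstable Dutch roll mode
```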
Centers for Disease Control (CDC) Podcasts
2007-04-10
Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, the Centers for Disease Control and Prevention (CDC) has been at the forefront of efforts to assess people's exposure to environmental and hazardous chemicals and to protect against it. This report provides information about hazardous chemicals and useful tips on how to protect yourself and your family from harmful exposure. Created: 4/10/2007 by CDC National Center for Environmental Health. Date Released: 4/13/2007.
Deep gray matter demyelination detected by magnetization transfer ratio in the cuprizone model.
Directory of Open Access Journals (Sweden)
Sveinung Fjær
Full Text Available In multiple sclerosis (MS), the correlation between lesion load on conventional magnetic resonance imaging (MRI) and clinical disability is weak. This clinico-radiological paradox might partly be due to the low sensitivity of conventional MRI to gray matter demyelination. Magnetization transfer ratio (MTR) has previously been shown to detect white matter demyelination in mice. In this study, we investigated whether MTR can detect gray matter demyelination in cuprizone-exposed mice. A total of 54 female C57BL/6 mice were split into one control group and eight cuprizone-exposed groups ([Formula: see text]). The mice were exposed to [Formula: see text] (w/w) cuprizone for up to six weeks. MTR images were obtained at a 7 Tesla Bruker MR scanner before cuprizone exposure, weekly for six weeks during cuprizone exposure, and once two weeks after termination of cuprizone exposure. Immunohistochemistry staining for myelin (anti-proteolipid protein) and oligodendrocytes (anti-neurite outgrowth inhibitor protein A) was obtained after each weekly scanning. Rates of MTR change and correlations between MTR values and histological findings were calculated in five brain regions. In the corpus callosum and the deep gray matter a significant rate of MTR decrease was found, [Formula: see text] per week ([Formula: see text]) and [Formula: see text] per week ([Formula: see text]), respectively. The MTR values correlated with myelin loss as evaluated by immunohistochemistry (corpus callosum: [Formula: see text]; deep gray matter: [Formula: see text]), but did not correlate with oligodendrocyte density. Significant results were not found in the cerebellum, the olfactory bulb or the cerebral cortex. This study shows that MTR can be used to detect demyelination in the deep gray matter, which is of particular interest for imaging of patients with MS, as deep gray matter demyelination is common in MS and is not easily detected on conventional clinical MRI.
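The two quantities reported here, a weekly rate of MTR change and a correlation with histology, reduce to a linear regression slope and a Pearson correlation. The sketch below shows that computation on made-up numbers; the study's actual values are hidden behind the [Formula: see text] placeholders and are not recoverable from the record:

```python
import numpy as np
from scipy import stats

# Synthetic weekly data standing in for one brain region (all values invented).
weeks  = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
mtr    = np.array([0.34, 0.33, 0.32, 0.30, 0.29, 0.27, 0.26])  # MTR values
myelin = np.array([0.95, 0.92, 0.85, 0.74, 0.66, 0.55, 0.50])  # staining fraction

# (i) rate of MTR change: slope of MTR vs. exposure time
slope, intercept, r, p, se = stats.linregress(weeks, mtr)
print(f"MTR change: {slope:.4f} per week (p = {p:.3g})")

# (ii) correlation between MTR and histological myelin measure
rho, p_corr = stats.pearsonr(mtr, myelin)
print(f"MTR vs. myelin staining: r = {rho:.2f} (p = {p_corr:.3g})")
```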
Sandford, M. C.; Ricketts, R. H.; Watson, J. J.
1981-01-01
A high aspect ratio supercritical wing with oscillating control surfaces is described. The semispan wing model was instrumented with 252 static orifices and 164 in situ dynamic pressure gauges for studying the effects of control surface position and sinusoidal motion on steady and unsteady pressures. Data from the present test (the second in a series of tests on this model) were obtained in the Langley Transonic Dynamics Tunnel at Mach numbers of 0.60 and 0.78 and are presented in tabular form.
DEFF Research Database (Denmark)
Hald, Tine
methodological uncertainties, and therefore, preferences for types of models cannot be specified. Newer approaches need to be identified and considered. Fit for purpose and simplicity are key issues when developing QMRA models. However, limits on time and resources may restrict the model selection. At the start......” should be used carefully, with scientific criteria and context clearly defined, or avoided....
Development of Nonlinear Flight Mechanical Model of High Aspect Ratio Light Utility Aircraft
Bahri, S.; Sasongko, R. A.
2018-04-01
The implementation of a Flight Control Law (FCL) for an Aircraft Electronic Flight Control System (EFCS) aims to reduce pilot workload, while also enhancing control performance during missions that require long-endurance flight and high-accuracy maneuvers. In the development of the FCL, a quantitative representation of the aircraft dynamics is needed to describe the aircraft's dynamic characteristics and to serve as the basis of the FCL design. Hence, a 6-degree-of-freedom nonlinear model of a light utility aircraft's dynamics, also called the nonlinear Flight Mechanical Model (FMM), is constructed. This paper shows the construction of the FMM from its mathematical formulation, the architecture design of the FMM, the trimming process and simulations. The FMM is verified by analyzing aircraft behaviour in selected trimmed conditions.
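A minimal sketch of the trimming step mentioned above, using a toy longitudinal point-mass model as a stand-in for the full 6-DOF FMM: solve for the angle of attack and thrust that zero the accelerations at a target airspeed. All aircraft parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy longitudinal model in level flight (all parameters invented).
m, g, S, rho = 750.0, 9.81, 14.0, 1.225   # mass, gravity, wing area, density
CL0, CLa, CD0, k = 0.3, 5.0, 0.03, 0.05   # linear lift, parabolic drag polar

def accelerations(z, V):
    """Residuals [udot, wdot] for trim unknowns z = (alpha, thrust)."""
    alpha, T = z
    q = 0.5 * rho * V**2 * S               # dynamic pressure
    CL = CL0 + CLa * alpha
    CD = CD0 + k * CL**2
    L, D = q * CL, q * CD
    udot = (T - D) / m                      # along-path acceleration
    wdot = (L - m * g) / m                  # normal acceleration (level flight)
    return [udot, wdot]

# Trim: find (alpha, T) such that all accelerations vanish at V = 30 m/s.
alpha_trim, T_trim = fsolve(accelerations, x0=[0.05, 500.0], args=(30.0,))
print(f"trim: alpha = {np.degrees(alpha_trim):.2f} deg, thrust = {T_trim:.0f} N")
```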
Ratios of Vector and Pseudoscalar B Meson Decay Constants in the Light-Cone Quark Model
Dhiman, Nisha; Dahiya, Harleen
2018-05-01
We study the decay constants of pseudoscalar and vector B mesons in the framework of the light-cone quark model. We apply the variational method to the relativistic Hamiltonian with a Gaussian-type trial wave function to obtain the values of β (the scale parameter). Then, with the help of known values of the constituent quark masses, we obtain numerical results for the decay constants f_P and f_V, respectively. We compare our numerical results with the existing experimental data.
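The variational step described here, minimizing the Hamiltonian expectation value over the Gaussian scale parameter β, can be sketched as follows. The harmonic-oscillator Hamiltonian is a simple stand-in with a known closed-form answer (β → √(mω)), not the relativistic meson Hamiltonian used in the paper:

```python
from scipy.optimize import minimize_scalar

def energy(beta, m=1.0, omega=1.0):
    """<H> for the 3D trial wave function psi ~ exp(-beta^2 r^2 / 2) with
    H = p^2/(2m) + m omega^2 r^2 / 2:
        <T> = 3 beta^2 / (4 m),   <V> = 3 m omega^2 / (4 beta^2)."""
    return 3.0 * beta**2 / (4.0 * m) + 3.0 * m * omega**2 / (4.0 * beta**2)

res = minimize_scalar(energy, bounds=(0.1, 10.0), method="bounded")
print(f"optimal beta = {res.x:.3f}, E_min = {res.fun:.3f}")
# Expected: beta = sqrt(m * omega) = 1.0 and E_min = 3 omega / 2 = 1.5.
```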
International Nuclear Information System (INIS)
Lima, F.R.A.; Kramer, R.; Khoury, H.J.; Santos, A.M.; Loureiro, E.C.M.
2005-01-01
The development of new and sophisticated Monte Carlo codes and tomographic (voxel) human phantoms motivated the International Commission on Radiological Protection (ICRP) to revise the traditional models of exposure, which have been used to calculate effective dose coefficients for organs and tissues based on mathematical phantoms known as MIRD5. This paper shows the results of calculations using the tomographic phantoms MAX (Male Adult voXel) and FAX (Female Adult voXel), recently developed by the authors, as well as the gender-specific MIRD5-type phantoms ADAM and EVA, coupled to the EGS4 and MCNP4C Monte Carlo codes, for internal exposure to photons with energies between 10 keV and 4 MeV for several source organs. Effective doses for both types of model, tomographic and mathematical, are compared separately as functions of the Monte Carlo code used, the compositions of the human tissues and the anatomy reproduced from tomographic images. The results indicate that for internal photon exposure, the use of voxel-based models of exposure increases the values of effective dose by up to 70% for some source organs considered in this study, when compared with the corresponding results obtained with MIRD5-type phantoms.
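The quantity being compared between phantom types is the effective dose, defined by ICRP as the weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A minimal sketch follows; the weights shown are a small ICRP-60-style subset and the organ doses are invented, purely for illustration:

```python
# Subset of tissue weighting factors in the style of ICRP 60 (illustrative,
# not the full regulatory set, which sums to 1).
w_T = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12,
       "liver": 0.05, "thyroid": 0.05, "remainder": 0.05}

def effective_dose(H_T):
    """E = sum_T w_T * H_T, with organ equivalent doses H_T in Sv."""
    return sum(w_T[organ] * H for organ, H in H_T.items())

# Invented per-organ equivalent doses for one phantom/code combination.
H_voxel = {"gonads": 1.0e-3, "lung": 0.8e-3, "stomach": 0.9e-3,
           "liver": 0.7e-3, "thyroid": 0.4e-3, "remainder": 0.6e-3}
print(f"E = {effective_dose(H_voxel):.2e} Sv")
```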
Numerical estimation of wall friction ratio near the pseudo-critical point with CFD-models
International Nuclear Information System (INIS)
Angelucci, M.; Ambrosini, W.; Forgione, N.
2013-01-01
In this paper, the STAR-CCM+ CFD code is used in the attempt to reproduce the values of friction factor observed in experimental data at supercritical pressures at various operating conditions. A short survey of available data and correlations for smooth-pipe friction in circular pipes puts the basis for the discussion, reporting observed trends of friction factor in the liquid-like and the gas-like regions and within the transitional region across the pseudo-critical temperature. For smooth pipes, a general decrease of the friction factor in the transitional region is reported, constituting one of the relevant effects to be predicted by the computational fluid dynamics models. A limited number of low-Reynolds-number models are adopted, making use of refined near-wall discretisation as required by the constraint y⁺ < 1 at the wall. In particular, the Lien k–ε and the SST k–ω models are considered. The values of the wall shear stress calculated by the code are then post-processed on the basis of bulk fluid properties to obtain the Fanning and then the Darcy–Weisbach friction factors, based on their classical definitions. The obtained values are compared with those provided by experimental tests and correlations, finding a reasonable qualitative agreement. Expectedly, the agreement is better in the gas-like and liquid-like regions, where fluid property changes are moderate, than in the transitional region, where the trends provided by available correlations are reproduced only in a qualitative way.
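The post-processing step described above follows directly from the classical definitions, Fanning f = 2τ_w/(ρ_b u_b²) and Darcy-Weisbach f_D = 4f. A minimal sketch, with a Blasius smooth-pipe correlation for comparison; the bulk values are illustrative, not taken from the paper:

```python
def fanning(tau_w, rho_b, u_b):
    """Fanning friction factor from wall shear stress and bulk properties."""
    return 2.0 * tau_w / (rho_b * u_b**2)

def darcy(tau_w, rho_b, u_b):
    """Darcy-Weisbach friction factor: four times the Fanning factor."""
    return 4.0 * fanning(tau_w, rho_b, u_b)

def blasius(Re):
    """Blasius correlation for the Darcy factor in smooth pipes
    (turbulent, roughly 4e3 < Re < 1e5)."""
    return 0.316 * Re**-0.25

tau_w, rho_b, u_b, Re = 1.2, 280.0, 1.5, 2.0e5   # Pa, kg/m^3, m/s, -
print(f"f_Darcy (from CFD wall shear) = {darcy(tau_w, rho_b, u_b):.4f}")
print(f"f_Darcy (Blasius)             = {blasius(Re):.4f}")
```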
Models for recurrent gas release event behavior in hazardous waste tanks
International Nuclear Information System (INIS)
Anderson, D.N.; Arnold, B.C.
1994-08-01
Certain radioactive waste storage tanks at the United States Department of Energy Hanford facilities continuously generate gases as a result of radiolysis and chemical reactions. The congealed sludge in these tanks traps the gases and causes the level of the waste within the tanks to rise. The waste level continues to rise until the sludge becomes buoyant and 'rolls over', changing places with heavier fluid on top. During a rollover, the trapped gases are released, resulting in a sudden drop in the waste level. This is known as a gas release event (GRE). After a GRE, the waste re-congeals and gas again accumulates, leading to another GRE. We present nonlinear time series models that produce simulated sample paths closely resembling the temporal history of waste levels in these tanks. The models also imitate the random GRE behavior observed in the temporal waste-level history of a storage tank. We are interested in using the structure of these models to understand the probabilistic behavior of the random variable 'time between consecutive GREs'. Understanding the stochastic nature of this random variable is important because the hydrogen and nitrous oxide gases released during a GRE are flammable and the ammonia that is released is a health risk. From a safety perspective, activity around such waste tanks should be halted when a GRE is imminent. With credible GRE models, we can establish time windows in which waste tank research and maintenance activities can be safely performed.
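A toy simulation of the qualitative mechanism described above: the level rises as gas is trapped, a rollover occurs at a random threshold, and the level drops suddenly at each GRE, yielding a sample of inter-event times. All rates and thresholds below are invented; the paper's actual models are nonlinear time series fitted to tank data, not this toy:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps=5000, rise=0.02, noise=0.05):
    """Simulate a waste-level sample path with random rollover thresholds;
    return the observed times between consecutive GREs."""
    level, base = 0.0, 0.0
    threshold = rng.normal(10.0, 1.0)      # random level rise at which
    gre_times, t_last = [], 0              # the sludge rolls over
    for t in range(n_steps):
        level += rise + noise * rng.standard_normal()   # gas accumulation
        if level - base >= threshold:      # rollover: trapped gas released
            gre_times.append(t - t_last)   # time since the previous GRE
            t_last = t
            level = base                   # sudden drop in waste level
            threshold = rng.normal(10.0, 1.0)
    return np.array(gre_times)

waits = simulate()
print(f"{len(waits)} GREs, mean inter-event time = {waits.mean():.0f} steps")
```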
This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...
Hazard rate model and statistical analysis of a compound point process
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2