Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool
DEFF Research Database (Denmark)
Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel
2015-01-01
Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....
A Model for Generating Multi-hazard Scenarios
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own different rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
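The Monte Carlo approach described above can be sketched as follows. This is a minimal illustration, not the authors' model: the hazard names, annual occurrence probability ranges, and the "multi-hazard year" criterion are all invented for the example.

```python
import random

# Hypothetical annual occurrence probability ranges per hazard,
# representing parameter uncertainty (illustrative values only).
HAZARDS = {
    "earthquake": (0.005, 0.02),
    "landslide": (0.05, 0.15),
    "flood": (0.10, 0.30),
}

def simulate(years=50, runs=5000, seed=42):
    """Fraction of Monte Carlo runs containing at least one year in
    which two or more hazards occur together."""
    rng = random.Random(seed)
    multi = 0
    for _ in range(runs):
        # Draw one plausible parameter set per run (parameter uncertainty).
        p = {h: rng.uniform(lo, hi) for h, (lo, hi) in HAZARDS.items()}
        if any(sum(rng.random() < p[h] for h in p) >= 2
               for _ in range(years)):
            multi += 1
    return multi / runs

print(simulate())
```

Varying the ranges (e.g. to represent a mitigation intervention that lowers flood probability) and re-running gives a simple way to compare scenarios.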
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. This paper considers the problem of constructing confidence intervals for the first crossing time point of two hazard functions. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
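The first-crossing point can be illustrated numerically. The sketch below is not the paper's kernel estimator: it uses two hypothetical parametric Weibull hazards (one decreasing, one increasing) and locates their first crossing by a scan plus bisection.

```python
def weibull_hazard(t, shape, scale):
    """Hazard of a Weibull distribution: (k/s) * (t/s)^(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def first_crossing(h1, h2, t_max=10.0, step=1e-3):
    """Scan for the first sign change of h1 - h2, then refine by bisection."""
    t = step
    prev = h1(t) - h2(t)
    while t < t_max:
        t += step
        cur = h1(t) - h2(t)
        if prev * cur <= 0:       # sign change: crossing in (t - step, t)
            lo, hi = t - step, t
            for _ in range(60):   # bisection refinement
                mid = 0.5 * (lo + hi)
                if (h1(lo) - h2(lo)) * (h1(mid) - h2(mid)) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = cur
    return None

h1 = lambda t: weibull_hazard(t, 0.8, 1.0)   # decreasing hazard
h2 = lambda t: weibull_hazard(t, 1.5, 1.0)   # increasing hazard
print(first_crossing(h1, h2))   # analytic crossing: (0.8/1.5)**(1/0.7) ~ 0.407
```

A confidence interval, as in the paper, would come from repeating such a point estimate over resampled or simulated data rather than from closed-form hazards.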
A flexible additive multiplicative hazard model
DEFF Research Database (Denmark)
Martinussen, T.; Scheike, T. H.
2002-01-01
Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques are used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We demonstrate the flexibility of a hazard modeling distribution that can approximate several other distributions.
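The kind of distributional comparison described above can be sketched by evaluating hazard shapes directly. The three distributions and parameter values below are illustrative choices, not taken from the paper: exponential (constant hazard), Weibull (monotone hazard), and log-logistic (unimodal hazard).

```python
def h_exponential(t, lam=1.0):
    return lam                                             # constant hazard

def h_weibull(t, shape=2.0, scale=1.0):
    return (shape / scale) * (t / scale) ** (shape - 1.0)  # monotone hazard

def h_loglogistic(t, shape=2.0, scale=1.0):
    x = (t / scale) ** shape
    return (shape / t) * x / (1.0 + x)                     # rises then falls

for t in (0.5, 1.0, 2.0):
    print(t, h_exponential(t), h_weibull(t), round(h_loglogistic(t), 3))
```

Plotting these over a grid of failure times makes the flexibility argument visually: a sufficiently flexible family can approximate each of the three hazard shapes.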
The linear transformation model with frailties for the analysis of item response times.
Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A
2013-02-01
The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after a non-parametric monotone transformation, follow a linear model with latent speed as a covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds, whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.
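The special case where the monotone transformation is the logarithm and the error is normal (the log-normal / Box-Cox normal member of the family) can be simulated directly. The item intensity, latent speed, and error scale below are invented values for illustration only.

```python
import math
import random

def simulate_rt(speed, intensity, sigma, rng):
    """log(RT) = intensity - speed + sigma * e, with e ~ N(0, 1).
    Faster examinees (larger speed) produce shorter response times."""
    return math.exp(intensity - speed + sigma * rng.gauss(0.0, 1.0))

rng = random.Random(0)
rts = [simulate_rt(speed=0.5, intensity=4.0, sigma=0.4, rng=rng)
       for _ in range(20000)]
mean_log_rt = sum(math.log(t) for t in rts) / len(rts)
print(round(mean_log_rt, 2))  # should be close to intensity - speed = 3.5
```

Replacing the logarithm with an unspecified monotone transformation, to be estimated non-parametrically, recovers the general model described in the abstract.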
Model Validation in Ontology Based Transformations
Directory of Open Access Journals (Sweden)
Jesús M. Almendros-Jiménez
2012-10-01
Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when the source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and postconditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.
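The pre/postcondition style of transformation validation can be shown on a toy ER-to-Relational mapping. The model encoding, the transformation, and the two conditions below are invented for brevity; the paper's actual approach is logic-programming based.

```python
def er_to_rm(er):
    """Toy transformation: map each entity to a table whose columns
    are the entity's attributes."""
    return {e: list(attrs) for e, attrs in er["entities"].items()}

def precondition(er):
    # Semantic requirement on the source model:
    # every entity must declare at least one attribute.
    return all(attrs for attrs in er["entities"].values())

def postcondition(er, rm):
    # Soundness requirement linking source and target:
    # every entity maps to a table with the same attribute names.
    return all(set(rm[e]) == set(a) for e, a in er["entities"].items())

er = {"entities": {"Customer": ["id", "name"], "Order": ["id", "date"]}}
assert precondition(er), "source model violates its semantic requirements"
rm = er_to_rm(er)
assert postcondition(er, rm), "transformation is not sound"
print(sorted(rm))  # → ['Customer', 'Order']
```

An invariant, in the paper's terminology, would be a property checked of the transformation itself (for all valid sources) rather than of one source/target pair.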
A high-resolution global flood hazard model
Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.
2015-09-01
Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ~90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ~1 km, mean absolute error in flooded fraction falls to ~5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.
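The validation metrics mentioned above (fraction of benchmark at-risk area captured, without excessive false positives) can be computed from paired flood maps. The tiny boolean grids below are illustrative stand-ins for modeled and benchmark flood extents.

```python
# 1 = flooded cell, 0 = dry cell (toy 3x4 grids, invented values).
model = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]
benchmark = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
]

hits = false_alarms = benchmark_wet = 0
for m_row, b_row in zip(model, benchmark):
    for m, b in zip(m_row, b_row):
        benchmark_wet += b
        if m and b:
            hits += 1          # benchmark flooded area captured
        elif m and not b:
            false_alarms += 1  # over-prediction

hit_rate = hits / benchmark_wet
print(hit_rate, false_alarms)  # → 0.8333... 1
```

The paper's "two thirds to three quarters" claim corresponds to a hit rate of roughly 0.67 to 0.75 when the same counting is applied to the ~90 m global maps against the UK and Canadian benchmarks.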
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
In the past, proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders is also investigated. Examples are given to illustrate different aging properties and stochastic comparisons of the model.
Social transformation in transdisciplinary natural hazard management
Attems, Marie-Sophie; Fuchs, Sven; Thaler, Thomas
2017-04-01
Due to annual increases in natural hazard losses, authorities and communities in Europe are discussing innovative solutions to increase resilience, and consequently business-as-usual risk management practices are often questioned. The current situation therefore requires a societal transformation in order to respond adequately and effectively to new global dynamics. An emerging concept is the implementation of multiple-use mitigation systems against hazards such as floods, avalanches and landslides. However, one key aspect is the involvement of knowledge from outside academic research: transdisciplinary knowledge can be used to discuss the vital factors needed to upscale the implementation of multiple-use mitigation measures. The method used in this contribution is an explorative scenario analysis applied in Austria, processing the knowledge gained in transdisciplinary workshops. The scenario analysis combines qualitative data with quantitative relations in order to generate a set of plausible future outcomes. The goal is to establish a small number of consistent scenarios that are efficient, and thereby representative, as well as significantly different from each other. The results of the discussions among relevant stakeholders within the workshops, and of a subsequent quantitative analysis, showed that the vital variables influencing the multiple use of mitigation measures are (1) current legislation, (2) risk acceptance among authorities and the public, (3) land-use pressure, (4) the demand for innovative solutions, (5) the available technical standards and possibilities, and (6) policy entrepreneurship. Four different scenarios were the final result of the analysis. In conclusion, to make multiple-use alleviation systems possible, contemporary risk management strategies will have to change in the future. Legislation and thereby current barriers have to be
Lin, Sijie; Taylor, Alicia A; Ji, Zhaoxia; Chang, Chong Hyun; Kinsinger, Nichola M; Ueng, William; Walker, Sharon L; Nel, André E
2015-02-24
Although copper-containing nanoparticles are used in commercial products such as fungicides and bactericides, we presently do not understand the environmental impact on other organisms that may be inadvertently exposed. In this study, we used the zebrafish embryo as a screening tool to study the potential impact of two nano Cu-based materials, CuPRO and Kocide, in comparison to nanosized and micron-sized Cu and CuO particles in their pristine form (0-10 ppm) as well as following their transformation in an experimental wastewater treatment system. This was accomplished by construction of a modeled domestic septic tank system from which effluents could be retrieved at different stages following particle introduction (10 ppm). The Cu speciation in the effluent was identified as nondissolvable inorganic Cu(H2PO2)2 and nondiffusible organic Cu by X-ray diffraction, inductively coupled plasma mass spectrometry (ICP-MS), diffusive gradients in thin-films (DGT), and Visual MINTEQ software. While the nanoscale materials, including the commercial particles, were clearly more potent (showing 50% hatching interference above 0.5 ppm) than the micron-scale particulates with no effect on hatching up to 10 ppm, the Cu released from the particles in the septic tank underwent transformation into nonbioavailable species that failed to interfere with the function of the zebrafish embryo hatching enzyme. Moreover, we demonstrate that the addition of humic acid, as an organic carbon component, could lead to a dose-dependent decrease in Cu toxicity in our high content zebrafish embryo screening assay. Thus, the use of zebrafish embryo screening, in combination with the effluents obtained from a modeled exposure environment, enables a bioassay approach to follow the change in the speciation and hazard potential of Cu particles instead of difficult-to-perform direct particle tracking.
Test-driven verification/validation of model transformations
Institute of Scientific and Technical Information of China (English)
László LENGYEL; Hassan CHARAF
2015-01-01
Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.
INFORMATION MODEL OF SOCIAL TRANSFORMATIONS
Directory of Open Access Journals (Sweden)
Мария Васильевна Комова
2013-09-01
The social transformation is considered as a process of qualitative changes in society, creating a new level of organization in all areas of life, in different social formations and societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on understanding them as a consequence of information exchange processes in society. After defining the conceptual model of the study, the author uses the following methods: description, analysis, synthesis, and comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of societal transformation as well. The information model of social transformations is based on defining societal transformation as change in the information that functions in society's information space. The study of social transformations is the study of information flows circulating in society and characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by factors representing the whole complex of material objects. The integrated information model of social transformations foresees the interaction of the following components: social memory, information space, and the social ideal. To determine the dynamics and intensity of social transformations, the author uses the notions of an "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold makes it possible to simulate the course of social development, to predict the
CalTOX, a multimedia total exposure model for hazardous-waste sites
International Nuclear Information System (INIS)
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substance release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
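The dose calculation described in the last sentences reduces to a sum over pathways of concentration times intake factor. The sketch below uses three hypothetical pathways with invented concentrations and intake rates, not CalTOX's twenty-three pathways or its actual parameter values.

```python
# Hypothetical contact-medium concentrations and daily intakes:
# medium: (concentration, daily intake)
pathways = {
    "tap_water": (0.002, 2.0),       # mg/L x L/day
    "personal_air": (0.0005, 15.0),  # mg/m3 x m3/day
    "soil_ingestion": (1.5, 0.0001), # mg/kg x kg/day
}
body_weight_kg = 70.0  # assumed adult body weight

# Average daily potential dose, normalized by body weight (mg/kg-day).
daily_dose = sum(conc * intake
                 for conc, intake in pathways.values()) / body_weight_kg
print(daily_dose)
```

In a population assessment, each concentration and intake factor would itself be a distribution rather than a point value, which is what makes the uncertainty analysis mentioned in the abstract necessary.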
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity, and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best-fitting distributions were diverse. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
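An accelerated failure time (AFT) duration predictor of the kind examined above can be sketched with a Weibull specification. The coefficients, covariate names, and shape parameter below are invented for illustration; they are not fitted values from the Beijing dataset.

```python
import math

# Hypothetical Weibull AFT coefficients on the log-time scale.
COEF = {"intercept": 3.4, "lanes_blocked": 0.25, "night": 0.15, "injury": 0.40}
SHAPE = 1.3  # assumed Weibull shape parameter

def median_duration(x):
    """Median of a Weibull AFT model, in minutes:
    exp(x'beta) * ln(2)^(1/shape). Covariates scale time
    multiplicatively, which is the AFT interpretation."""
    eta = COEF["intercept"] + sum(COEF[k] * v for k, v in x.items())
    return math.exp(eta) * math.log(2.0) ** (1.0 / SHAPE)

base = median_duration({"lanes_blocked": 1, "night": 0, "injury": 0})
severe = median_duration({"lanes_blocked": 2, "night": 1, "injury": 1})
print(round(base, 1), round(severe, 1))
```

Each positive coefficient multiplies the predicted duration by exp(coefficient), so a severe incident (more lanes blocked, at night, with injury) gets a proportionally longer predicted median.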
Lofts, Stephen; Keller, Virginie; Dumont, Egon; Williams, Richard; Praetorius, Antonia; von der Kammer, Frank
2016-04-01
The development of innovative new chemical products is a key aspect of the modern economy, yet society demands that such development be environmentally sustainable. Developing knowledge of how new classes of chemicals behave following release to the environment is key to understanding the hazards that will potentially result. Nanoparticles are a key example of a class of chemicals whose production and use have expanded significantly in recent years, so there is a need to develop tools to predict their potential hazard following deliberate or incidental release to the environment. Generalising the understanding of the environmental behaviour of manufactured nanoparticles is challenging, as they are chemically and physically diverse (e.g. metals, metal oxides, carbon nanotubes, cellulose, quantum dots). Furthermore, nanoparticles may be manufactured with capping agents to modify their desired behaviour in industrial applications; such agents may also influence their environmental behaviour. Nanoparticles may also become significantly modified from their as-manufactured forms both before and after the point of environmental release. Tools for predicting nanoparticle behaviour and hazard need to be able to consider a wide range of release scenarios and aspects of nanoparticle behaviour in the environment (e.g. dissolution, transformation of capping agents, agglomeration and aggregation behaviour), where such behaviours are not shared by all types of nanoparticle. This implies the need for flexible, future-proofed tools capable of being updated as new understanding of behavioural processes emerges. This presentation will introduce the NanoFASE model system, a multimedia modelling framework for the transport, transformation and biouptake of manufactured nanoparticles. The complete system will comprise atmospheric, terrestrial and aquatic compartments to allow holistic simulation of nanoparticles; this
An optimization model for transportation of hazardous materials
International Nuclear Information System (INIS)
Seyed-Hosseini, M.; Kheirkhah, A. S.
2005-01-01
In this paper, the optimal routing problem for transportation of hazardous materials is studied. Routing for the purpose of reducing the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, a routing problem is designed according to the current rules and regulations for road transportation of hazardous materials in Iran. In this problem, the routes for several independent movements are determined simultaneously. To examine the model, the problem of transporting two different dangerous materials in the road network of Mazandaran province in northern Iran is formulated and solved by applying an integer programming model.
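The single-movement core of such models is a minimum-risk path search over the road network. The sketch below replaces the paper's integer program with a simple Dijkstra search over invented per-edge risk scores; the network, node names, and scores are all hypothetical.

```python
import heapq

# Toy road network: node -> [(neighbour, transport risk score)].
edges = {
    "depot": [("A", 4.0), ("B", 1.0)],
    "A": [("plant", 1.0)],
    "B": [("A", 1.0), ("plant", 6.0)],
    "plant": [],
}

def min_risk_path(start, goal):
    """Dijkstra over cumulative edge risk; returns (risk, path)."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        risk, node, path = heapq.heappop(heap)
        if node == goal:
            return risk, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, r in edges[node]:
            if nxt not in seen:
                heapq.heappush(heap, (risk + r, nxt, path + [nxt]))
    return None

print(min_risk_path("depot", "plant"))  # → (3.0, ['depot', 'B', 'A', 'plant'])
```

Routing several movements simultaneously, as in the paper, additionally couples the paths (e.g. through shared-exposure or equity constraints), which is why an integer programming formulation is used there instead of independent shortest-path searches.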
The New Italian Seismic Hazard Model
Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.
2017-12-01
In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly aimed at updating the seismic code. The CPS designed a roadmap for releasing, within three years, a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized into six tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has selected the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPEs task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, against the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme
Capra, L.; Macías, J. L.; Scott, K. M.; Abrams, M.; Garduño-Monroy, V. H.
2002-03-01
Volcanoes of the Trans-Mexican Volcanic Belt (TMVB) have yielded numerous sector and flank collapses during Pleistocene and Holocene times. Sector collapses associated with magmatic activity have yielded debris avalanches with generally limited runout extent (e.g. Popocatépetl, Jocotitlán, and Colima volcanoes). In contrast, flank collapses (smaller failures not involving the volcano summit), both associated and unassociated with magmatic activity and correlating with intense hydrothermal alteration in ice-capped volcanoes, commonly have yielded highly mobile cohesive debris flows (e.g. Pico de Orizaba and Nevado de Toluca volcanoes). Collapse orientation in the TMVB is preferentially to the south and northeast, probably reflecting the tectonic regime of active E-W and NNW faults. The differing mobilities of the flows transformed from collapses have important implications for hazard assessment. Both sector and flank collapse can yield highly mobile debris flows, but this transformation is more common in the case of the smaller failures. High mobility is related to factors such as the water content and clay content of the failed material, the paleotopography, and the extent of entrainment of sediment during flow (bulking). The ratio of fall height to runout distance commonly used for hazard zonation of debris avalanches is not valid for debris flows, which are more effectively modeled with the relation of inundated area to failure or flow volume, coupled with the topography of the inundated area.
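The area-volume relation mentioned in the last sentence is typically a power law of the form B = c * V^(2/3). The coefficient 200 below is the widely cited lahar calibration of LAHARZ-style zonation, but treat both the form and the constant here as illustrative assumptions rather than values from this paper.

```python
def inundated_planimetric_area(volume_m3, c=200.0):
    """Planimetric inundated area (m^2) as a power law of flow
    volume (m^3): B = c * V^(2/3). The coefficient c is an assumed
    lahar-style calibration, not a value from the paper."""
    return c * volume_m3 ** (2.0 / 3.0)

for v in (1e5, 1e6, 1e7):  # flow volumes in m^3
    print(f"V = {v:.0e} m^3 -> B = {inundated_planimetric_area(v):.2e} m^2")
```

Because area scales with V^(2/3), a tenfold increase in flow volume enlarges the inundated area only by a factor of about 4.6, which is why volume uncertainty is tolerable in first-order hazard zonation.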
Automated economic analysis model for hazardous waste minimization
International Nuclear Information System (INIS)
Dharmavaram, S.; Mount, J.B.; Donahue, B.A.
1990-01-01
The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis for minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States.
Hazard identification based on plant functional modelling
International Nuclear Information System (INIS)
Rasmussen, B.; Whetton, C.
1993-10-01
A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
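The Intent/Method/Constraint decomposition described above maps naturally onto a small object model. The class names follow the report's terminology; the example plant functions and the constraint text are invented for illustration.

```python
class Intent:
    """A plant aspect represented as an object built around a goal,
    with Methods (how the intent is realised) and Constraints (what
    limits it); both may themselves be lower-level Intents."""

    def __init__(self, goal, methods=None, constraints=None):
        self.goal = goal
        self.methods = methods or []
        self.constraints = constraints or []

    def decompose(self):
        """Functional decomposition: return the lower-level Intents
        among this intent's methods and constraints."""
        return [x for x in self.methods + self.constraints
                if isinstance(x, Intent)]

cool = Intent("maintain coolant flow")
pump = Intent("run primary pump", methods=[cool])
plant = Intent("keep reactor within limits",
               methods=[pump], constraints=["max pressure 7 MPa"])
print([i.goal for i in plant.decompose()])  # → ['run primary pump']
```

Walking `decompose()` recursively yields the hierarchical, object-oriented structure on which the Concept Hazard Analysis keywords and worksheets are then applied.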
A baseline-free procedure for transformation models under interval censorship.
Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin
2005-12-01
An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.
Rose, L.M.; Herrmannsdoerfer, M.; Mazanek, S.; Van Gorp, P.M.E.; Buchwald, S.; Horn, T.; Kalnina, E.; Koch, A.; Lano, K.; Schätz, B.; Wimmer, M.
2014-01-01
We describe the results of the Transformation Tool Contest 2010 workshop, in which nine graph and model transformation tools were compared for specifying model migration. The model migration problem—migration of UML activity diagrams from version 1.4 to version 2.2—is non-trivial and practically
Modeling lahar behavior and hazards
Manville, Vernon; Major, Jon J.; Fagents, Sarah A.
2013-01-01
Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to >100 km at speeds exceeding tens of km h⁻¹. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.
Quantum decoration transformation for spin models
Energy Technology Data Exchange (ETDEWEB)
Braz, F.F.; Rodrigues, F.C.; Souza, S.M. de; Rojas, Onofre, E-mail: ors@dfi.ufla.br
2016-09-15
Extending the decoration transformation to quantum spin models is highly relevant, since most real materials can be well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties such as the persistence of symmetry and symmetry breaking during the transformation. In principle, the proposed transformation cannot be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative. However, the mapping is possible in the “classical” limit, establishing an equivalence between both quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction could influence the decoration transformation. This correction, however, is of little use for improving the quantum decoration transformation, because it involves second-nearest-neighbor and further-neighbor couplings, which makes establishing the equivalence between both lattice models a cumbersome task. The correction nevertheless gives valuable information about its contribution: for most Heisenberg-type models it is irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising–Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.
Modeling Compound Flood Hazards in Coastal Embayments
Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.
2017-12-01
Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels such as 100 and 500 year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), preferred marginal scenario and reproduced time series of ensembles based on Monte Carlo sampling of bivariate hazard domain. The comparison between resulting extreme water dynamics under the compound hazard scenarios explained above provides an insight to the
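The copula step described above can be sketched with a bivariate Gaussian copula and Monte Carlo sampling. The correlation value, percentile thresholds, and variable pairing are illustrative assumptions, not taken from the study:

```python
import math, random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_gaussian_copula(rho, n, rng):
    """Draw n pairs (u, v) on the unit square with Gaussian-copula dependence rho."""
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0, 1)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs

rng = random.Random(42)
# Hypothetical dependence between river discharge and coastal water level
pairs = sample_gaussian_copula(rho=0.6, n=10_000, rng=rng)
# Joint exceedance of both variables above their 90th percentiles:
p_joint = sum(u > 0.9 and v > 0.9 for u, v in pairs) / len(pairs)
p_indep = 0.1 * 0.1  # what independence would predict
print(p_joint, p_indep)
```

The inflation of `p_joint` over `p_indep` is exactly why ignoring the correlation structure understates compound flood hazard; design scenarios would then be drawn from the high-density region of this joint distribution.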
Transformation of UML models to CSP : a case study for graph transformation tools
Varró, D.; Asztalos, M.; Bisztray, D.; Boronat, A.; Dang, D.; Geiß, R.; Greenyer, J.; Van Gorp, P.M.E.; Kniemeyer, O.; Narayanan, A.; Rencis, E.; Weinell, E.; Schürr, A.; Nagl, M.; Zündorf, A.
2008-01-01
Graph transformation provides an intuitive mechanism for capturing model transformations. In the current paper, we investigate and compare various graph transformation tools using a compact practical model transformation case study carried out as part of the AGTIVE 2007 Tool Contest [22]. The aim of
Proportional hazards models of infrastructure system recovery
International Nuclear Information System (INIS)
Barker, Kash; Baroud, Hiba
2014-01-01
As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set
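A proportional hazards recovery model of the kind described can be sketched as follows; the Weibull baseline, covariate choices, and coefficient values are invented for illustration:

```python
import math

def recovery_probability(t, x, beta, shape=1.5, scale=24.0):
    """P(recovery by time t) under a proportional hazards model with a Weibull
    baseline hazard (shape/scale and beta are illustrative assumptions).
    Covariates scale the cumulative baseline hazard H0(t) = (t/scale)**shape."""
    H0 = (t / scale) ** shape
    return 1.0 - math.exp(-H0 * math.exp(sum(b * xi for b, xi in zip(beta, x))))

# Hypothetical covariates: disruption severity, repair-crew availability
beta = [-0.8, 0.5]  # severity slows recovery, crew availability speeds it up
mild  = recovery_probability(24.0, x=[0.2, 1.0], beta=beta)
harsh = recovery_probability(24.0, x=[2.0, 0.3], beta=beta)
print(mild, harsh)
```

This mirrors the two quantities named in the abstract: the hazard function gives the instantaneous recovery rate, and its integral gives the likelihood of recovery before a given time, modulated by covariates of the system or disruption.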
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
WAVELET TRANSFORM AND LIP MODEL
Directory of Open Access Journals (Sweden)
Guy Courbebaisse
2011-05-01
The Fourier transform is well suited to the study of stationary functions. Yet it is superseded by the wavelet transform for the powerful characterization of function features such as singularities. On the other hand, the LIP (Logarithmic Image Processing) model is a mathematical framework developed by Jourlin and Pinoli, dedicated to the representation and processing of gray-tone images, called hereafter logarithmic images. This mathematically well-defined model, comprising a Fourier transform "of its own", provides an effective tool for the representation of images obtained by transmitted light, such as microscope images. This paper presents a wavelet transform within the LIP framework, with preservation of the classical wavelet transform properties. We show that the fast computation algorithm due to Mallat can be easily used. An application is given for the detection of crests.
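The LIP framework's non-standard arithmetic can be sketched with its two basic operations; the formulas below are the standard LIP definitions for gray tones on a scale M (assumed 8-bit here), not anything specific to this paper's wavelet construction:

```python
M = 256.0  # gray-tone scale (assumed 8-bit convention)

def lip_add(f, g):
    """LIP addition of two gray tones: f (+) g = f + g - f*g/M."""
    return f + g - f * g / M

def lip_scalar(lam, f):
    """LIP scalar multiplication: lam (x) f = M - M*(1 - f/M)**lam."""
    return M - M * (1.0 - f / M) ** lam

# Consistency check: the LIP sum of a tone with itself equals 2 (x) f
f = 100.0
print(lip_add(f, f), lip_scalar(2.0, f))
```

A LIP-compatible wavelet transform must respect this arithmetic (rather than ordinary addition and scaling), which is the structural point of the paper.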
VMTL: a language for end-user model transformation
DEFF Research Database (Denmark)
Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel
2016-01-01
Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce Visual Model Transformation Language (VMTL) following…
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
Directory of Open Access Journals (Sweden)
J. Blahut
2010-11-01
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise
International Nuclear Information System (INIS)
Kahia, S.; Brinkman, H.; Bareith, A.; Siklossy, T.; Vinot, T.; Mateescu, T.; Espargilliere, J.; Burgazzi, L.; Ivanov, I.; Bogdanov, D.; Groudev, P.; Ostapchuk, S.; Zhabin, O.; Stojka, T.; Alzbutas, R.; Kumar, M.; Nitoi, M.; Farcasiu, M.; Borysiewicz, M.; Kowal, K.; Potempski, S.
2016-01-01
The goal of this report is to provide guidance on practices to model man-made hazards (mainly external fires and explosions) and accidental aircraft crash hazards and to implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the first ASAMPSA-E End Users Workshop (May 2014, Uppsala, Sweden). The objective of WP22 is to provide solutions for the different parts of a man-made hazards Level 1 PSA. This guidance focuses on man-made hazards, namely external fires and explosions, and on accidental aircraft crash hazards. The guidance developed refers to existing guidance whenever possible. The initial part of the guidance (the WP21 part) reflects current practices for assessing the frequencies of each type of hazard or combination of hazards (including correlated hazards) as initiating events for PSAs. The sources and quality of hazard data, the elements of hazard assessment methodologies, and relevant examples are discussed. Classification and criteria to properly assess hazard combinations, as well as examples and methods for the assessment of these combinations, are included in this guidance. The appendices present additional material with examples of practical approaches to aircraft crash and man-made hazards. The following issues are addressed: 1) hazard assessment methodologies, including issues related to hazard combinations; 2) modelling of safety-related SSC; 3) HRA; 4) emergency response; 5) multi-unit issues. Recommendations, limitations, gaps identified in the existing methodologies, and a list of open issues are included. At all stages of this guidance, and especially from an industrial end-user perspective, one must keep in mind that the development of man-made hazards probabilistic analysis must be conditioned on the ability to ultimately obtain a representative risk
Level Design as Model Transformation
Dormans, Joris
2011-01-01
This paper frames the process of designing a level in a game as a series of model transformations. The transformations correspond to the application of particular design principles, such as the use of locks and keys to transform a linear mission into a branching space. It shows that by using rewrite
Modelling the pulse transformer in SPICE
International Nuclear Information System (INIS)
Godlewska, Malgorzata; Górecki, Krzysztof; Górski, Krzysztof
2016-01-01
The paper is devoted to modelling pulse transformers in SPICE. It shows the character of selected models of this element, points out their advantages and disadvantages, and presents the results of experimental verification of the considered models. These models are characterized by varying degrees of complexity, from linearly coupled linear coils to nonlinear electrothermal models. The study was conducted for transformers with ring cores made of a variety of ferromagnetic materials, with sinusoidal excitation at a frequency of 100 kHz and different values of load resistance. The transformer operating conditions under which the considered models ensure acceptable accuracy of calculations are indicated.
Conceptual Development of a National Volcanic Hazard Model for New Zealand
Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom
2017-06-01
We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.
Agent-based Modeling with MATSim for Hazards Evacuation Planning
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
Transforming Graphical System Models to Graphical Attack Models
DEFF Research Database (Denmark)
Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof
2016-01-01
Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations. We present an approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset.
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.
A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis
Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva
2018-03-01
The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. Such clusters are followed by periods of quiescence in which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to account for series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and it can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, time-dependent hazard is estimated by a hazard function comprising three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard for the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
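The three-part hazard function described above can be sketched schematically; the functional forms and all parameter values below are illustrative assumptions, not the authors' fitted model:

```python
import math

def hazard(t, lam_bg=0.02, a=0.5, tau_dec=20.0, b=0.3, t_next=150.0, tau_inc=30.0):
    """Illustrative three-part hazard function (all parameters made up):
    a term decaying after the last large-earthquake cluster, a term growing
    toward the next expected cluster, and a constant background rate."""
    h_last = a * math.exp(-t / tau_dec)                      # decreasing part
    h_next = b / (1.0 + math.exp(-(t - t_next) / tau_inc))   # increasing part
    return h_last + h_next + lam_bg                          # + constant part

# Hazard shortly after a cluster, mid-cycle, and late in the cycle:
early, mid, late = hazard(1.0), hazard(75.0), hazard(200.0)
print(early, mid, late)
```

The resulting curve is U-shaped over the cycle: elevated just after a cluster, lowest mid-cycle, and rising again as the next cluster becomes due, which is the qualitative behavior the model is built to capture.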
A New Seismic Hazard Model for Mainland China
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
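The tapered Gutenberg-Richter (TGR) distribution used for the source-zone activity rates can be sketched as a Pareto distribution in seismic moment with an exponential taper at a corner moment; the parameter values here are illustrative, not the model's actual values:

```python
import math

def moment(mw):
    """Seismic moment in N*m from moment magnitude (standard conversion)."""
    return 10 ** (1.5 * mw + 9.05)

def tgr_rate(mw, a_rate=1.0, beta=0.65, mw_min=5.0, mw_corner=8.0):
    """Tapered Gutenberg-Richter: rate of events with magnitude >= mw,
    relative to a_rate events per year at mw_min. Pareto in seismic moment
    with an exponential taper at the corner moment (illustrative parameters)."""
    m, m_min, m_c = moment(mw), moment(mw_min), moment(mw_corner)
    return a_rate * (m_min / m) ** beta * math.exp((m_min - m) / m_c)

# The taper sharply suppresses rates near and above the corner magnitude:
r6, r8 = tgr_rate(6.0), tgr_rate(8.0)
print(r6, r8)
```

As in the abstract, the slope parameters would come from observed seismicity while the corner magnitude is constrained by the geodetic moment rate.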
Modeling and Hazard Analysis Using STPA
Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka
2010-09-01
A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, which describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analysts than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based systems engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions, rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis
MISTRAL : A Language for Model Transformations in the MOF Meta-modeling Architecture
Kurtev, Ivan; van den Berg, Klaas; Aßmann, Uwe; Aksit, Mehmet; Rensink, Arend
2005-01-01
In the Meta Object Facility (MOF) meta-modeling architecture, a number of model transformation scenarios can be identified. It could be expected that a meta-modeling architecture would be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite
Quantitative occupational risk model: Single hazard
International Nuclear Information System (INIS)
Papazoglou, I.A.; Aneziris, O.N.; Bellamy, L.J.; Ale, B.J.M.; Oh, J.
2017-01-01
A model for the quantification of the occupational risk of a worker exposed to a single hazard is presented. The model connects the working conditions and worker behaviour to the probability of an accident resulting in one of three types of consequence: recoverable injury, permanent injury, and death. Working conditions and safety barriers in place to reduce the likelihood of an accident are included. Logical connections are modelled through an influence diagram. Quantification of the model is based on two sources of information: a) the number of accidents observed over a period of time and b) assessment of exposure data for activities and working conditions over the same period of time and the same working population. The effectiveness of risk-reducing measures affecting the working conditions, worker behaviour, and/or safety barriers can be quantified through the effect of these measures on occupational risk. - Highlights: • Quantification of occupational risk from a single hazard. • An influence diagram connects working conditions, worker behaviour, and safety barriers. • The necessary data include the number of accidents and the total exposure of the workers. • The effectiveness of risk-reducing measures is quantified through their impact on the risk. • An example illustrates the methodology.
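The core quantification idea in this abstract can be sketched in a few lines; the code below is a minimal illustration only (the paper's influence-diagram structure and all numbers here are not from the source).

```python
# Minimal sketch (not the authors' influence-diagram model): estimate the
# accident rate of a single hazard from observed accidents and exposure,
# then express a risk-reducing measure as a multiplicative rate reduction.

def accident_rate(n_accidents, exposure_hours):
    """Accidents per hour of exposure to the hazard."""
    return n_accidents / exposure_hours

def annual_risk(rate, hours_per_year, p_death=0.01):
    """Approximate annual probability of a fatal accident for one worker
    (p_death = assumed probability that an accident is fatal)."""
    return 1.0 - (1.0 - rate * p_death) ** hours_per_year

rate = accident_rate(n_accidents=40, exposure_hours=2_000_000)   # 2e-5 per hour
base = annual_risk(rate, hours_per_year=1700)
mitigated = annual_risk(rate * 0.5, hours_per_year=1700)         # barrier halves the rate
print(f"baseline: {base:.2e}, with measure: {mitigated:.2e}")
```

The effectiveness of a measure is then read off as the relative change in the computed risk, mirroring the paper's approach of propagating measures through to occupational risk.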
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
Wang, Wei; Albert, Jeffrey M
2017-08-01
An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
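The approximation step described above can be illustrated numerically; the sketch below fits a fractional-polynomial-style basis to a known Weibull baseline log cumulative hazard by least squares (the authors' estimation procedure and basis choice may differ; the Weibull parameters are illustrative).

```python
# Hedged sketch: represent the baseline log cumulative hazard log H0(t)
# with a simple fractional-polynomial basis and fit it by least squares.
import numpy as np

t = np.linspace(0.1, 10.0, 200)
log_H0 = np.log(0.1) + 1.5 * np.log(t)   # true Weibull: H0(t) = 0.1 * t**1.5

# Basis columns: constant, log(t), sqrt(t) (powers from the usual FP set)
X = np.column_stack([np.ones_like(t), np.log(t), np.sqrt(t)])
beta, *_ = np.linalg.lstsq(X, log_H0, rcond=None)
approx = X @ beta

print("max abs error:", float(np.max(np.abs(approx - log_H0))))
```

Because the true log cumulative hazard is linear in log t here, the fit is essentially exact; for more complex hazard shapes the basis (or a restricted cubic spline, as the paper proposes) trades flexibility for parsimony.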
Design Transformations for Rule-based Procedural Modeling
Lienhard, Stefan; Lau, Cheryl; Mü ller, Pascal; Wonka, Peter; Pauly, Mark
2017-01-01
We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations for manipulating models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose the foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP), and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events, which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle
2018-01-01
For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...
Complete hazard ranking to analyze right-censored data: An ALS survival study.
Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang
2017-12-01
Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the utilization of machine learning base-learners, including Gaussian process regression, Lasso, and random forest, on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge. Ranked first place, the model gave more accurate ranking predictions on the PRO-ACT ALS dataset in comparison to the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.
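The general idea of turning right-censored survival data into hazard ranks can be illustrated with a naive pairwise-comparison scheme. This is NOT the published GuanRank algorithm (which assigns probabilistic ranks informed by Kaplan-Meier estimates); it only shows why censored records remain usable as ranking constraints.

```python
# Naive illustration: score each patient by the fraction of comparable
# pairs in which their survival is known to be shorter (higher = worse).

def hazard_scores(times, events):
    """times: observed times; events[i] = 1 if death observed, 0 if censored."""
    n = len(times)
    scores = [0.0] * n
    for i in range(n):
        wins = comps = 0
        for j in range(n):
            if i == j:
                continue
            # i's survival is known shorter than j's only if i died
            # before j's last observed time (j dead or censored later).
            if events[i] == 1 and times[i] < times[j]:
                wins += 1; comps += 1
            elif events[j] == 1 and times[j] < times[i]:
                comps += 1
        scores[i] = wins / comps if comps else 0.5
    return scores

# Patient 0 died earliest, so it receives the highest hazard score.
print(hazard_scores([2, 5, 8, 3], [1, 1, 0, 0]))  # [1.0, 0.5, 0.0, 0.0]
```

Once every patient has a scalar score, standard regressors (Gaussian process regression, Lasso, random forest) can be trained on the scores, which is the enabling step the abstract describes.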
Sustainability-Based Flood Hazard Mapping of the Swannanoa River Watershed
Directory of Open Access Journals (Sweden)
Ebrahim Ahmadisharaf
2017-09-01
An integrated framework is presented for sustainability-based flood hazard mapping of the Swannanoa River watershed in the state of North Carolina, U.S. The framework uses a hydrologic model for rainfall–runoff transformation, a two-dimensional unsteady hydraulic model for flood simulation, and a GIS-based multi-criteria decision-making technique for flood hazard mapping. Economic, social, and environmental flood hazards are taken into account. The importance of each hazard is quantified through a survey of experts. Utilizing the proposed framework, sustainability-based flood hazard mapping is performed for the 100-year design event. As a result, the overall flood hazard is provided for each geographic location. The sensitivity of the overall hazard with respect to the weights of the three hazard components was also investigated. While the conventional flood management approach is to assess the environmental impacts of mitigation measures after a set of feasible options has been selected, the presented framework incorporates the environmental impacts into the analysis concurrently with the economic and social influences. It thereby provides a more sustainable perspective on flood management and can greatly help decision makers make better-informed decisions by clearly conveying the impacts of flooding on the economy, society, and the environment.
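The multi-criteria combination step described above reduces to a weighted overlay of normalized hazard layers. The sketch below assumes equal-length per-cell layers; the weights are illustrative placeholders, not the paper's survey-derived values.

```python
# Minimal sketch of the GIS multi-criteria overlay: combine normalized
# economic, social, and environmental hazard layers with expert weights.

def overall_hazard(economic, social, environmental,
                   w_econ=0.4, w_soc=0.35, w_env=0.25):
    """Each input is a list of per-cell hazard values in [0, 1];
    weights must sum to 1."""
    assert abs(w_econ + w_soc + w_env - 1.0) < 1e-9
    return [w_econ * e + w_soc * s + w_env * v
            for e, s, v in zip(economic, social, environmental)]

# Two grid cells, one value per layer per cell
cells = overall_hazard([0.9, 0.2], [0.5, 0.4], [0.1, 0.8])
print(cells)  # [0.56, 0.42]
```

The sensitivity analysis mentioned in the abstract corresponds to re-running this overlay while perturbing the three weights.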
Opinion: The use of natural hazard modeling for decision making under uncertainty
David E. Calkin; Mike Mentis
2015-01-01
Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and on creating structured evaluation criteria for complex...
Directory of Open Access Journals (Sweden)
I. V. Novash
2015-01-01
This article describes the calculation of parameters for the three-phase two-winding power transformer model from the SimPowerSystems library, which is part of the MatLab-Simulink environment. The presented methodology is based on the power transformer nameplate data. Particular attention is paid to the calculation of the power transformer magnetization curve parameters. A methodology for calculating the parameters of a three-phase two-winding power transformer model that takes the nonlinearity of the magnetization curve into account is not presented in Russian- or English-language sources. The power transformer demo models described in the SimPowerSystems user's guide have pre-calculated parameters, but without reference to the sources from which they were determined. A power transformer is a nonlinear element of the power system, which is why the magnetization curve parameters are necessary for analyzing its performance in different modes of operation. The process during no-load energizing of a power transformer is of special interest. This regime is accompanied by an inrush current on the supply side of the transformer that is several times larger than the transformer's rated current. The sharp rise of the magnetizing current is explained by magnetic core saturation. Therefore, accounting for the magnetization characteristic is a mandatory requirement when modeling transformer no-load energizing. The authors attempt to put all calculation formulas into a more convenient form and to validate the calculation of the power transformer's nonlinear magnetization characteristic parameters. Inrush current oscillograms obtained during the simulation experiment confirmed the adequacy of the calculated model parameters.
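A representative fragment of nameplate-based parameter calculation is the standard short-circuit-test derivation of the series impedance. This is a generic textbook sketch, not the article's full procedure (which also covers the magnetization curve); the nameplate numbers are invented for illustration.

```python
# Hedged sketch: series R and leakage X from nameplate data using the
# standard short-circuit-test formulas (per-phase, star equivalent).
import math

def series_parameters(S_n, U_n, P_sc, u_k_percent):
    """S_n: rated power [VA]; U_n: rated winding voltage [V];
    P_sc: short-circuit (copper) losses [W]; u_k: impedance voltage [%]."""
    Z = (u_k_percent / 100.0) * U_n**2 / S_n   # short-circuit impedance
    R = P_sc * U_n**2 / S_n**2                 # resistance from copper losses
    X = math.sqrt(Z**2 - R**2)                 # leakage reactance
    return R, X

# Illustrative nameplate: 400 kVA, 10 kV, 5.5 kW losses, u_k = 4.5 %
R, X = series_parameters(S_n=400e3, U_n=10e3, P_sc=5500, u_k_percent=4.5)
print(f"R = {R:.4f} ohm, X = {X:.3f} ohm")
```

These series parameters feed the linear part of the Simulink model; the article's contribution is the additional nonlinear magnetization-curve calculation, which is not reproduced here.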
Six Sigma Driven Enterprise Model Transformation
Directory of Open Access Journals (Sweden)
Raymond Vella
2009-10-01
Enterprise architecture methods provide a structured system for understanding enterprise activities. However, existing enterprise modelling methodologies take static views of the enterprise and do not naturally lead to a path of improvement during enterprise model transformation. This paper discusses the need for a methodology to facilitate changes for improvement in an enterprise. The six sigma methodology is proposed as the tool to facilitate progressive and continual Enterprise Model Transformation, allowing businesses to adapt to meet increased customer expectations and global competition. An alignment of six sigma with the phases of the GERAM life cycle is described, with the inclusion of Critical-To-Satisfaction (CTS) requirements. The synergies of combining the two methodologies are presented in an effort to provide a more culturally embedded framework for Enterprise Model Transformation that builds on the success of six sigma.
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is carried out to compare the performance of the Bayes estimators with regression estimators of (a,b). The criterion for comparison is the Bayes risk associated with the respective estimator. The influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is also investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case c = 2.
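The hazard model above has a closed-form cumulative hazard and survival function, which the following worked sketch evaluates (the numeric values of a, b are illustrative, not from the paper):

```python
# General hazard rate model h(t) = a + b*t**(c-1), with cumulative hazard
# H(t) = a*t + (b/c)*t**c and survival function S(t) = exp(-H(t)).
import math

def hazard(t, a, b, c):
    return a + b * t**(c - 1)

def survival(t, a, b, c):
    return math.exp(-(a * t + (b / c) * t**c))

# c = 2 recovers the linearly increasing hazard h(t) = a + b*t
print(hazard(2.0, a=0.1, b=0.05, c=2))    # 0.1 + 0.05*2 = 0.2
print(survival(2.0, a=0.1, b=0.05, c=2))  # exp(-(0.2 + 0.1))
```

With c known, Bayes or regression estimation of (a, b) amounts to fitting these two functions to the censored failure data.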
Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method
Nugraha, A. L.; Awaluddin, M.; Sasmito, B.
2018-02-01
One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity. Natural disasters frequently affecting Semarang City include tidal floods, river floods, landslides, and droughts. Therefore, Semarang City needs spatial information obtained through multi-hazard mapping to support its disaster mitigation planning. Multi-hazard maps can be modelled from parameters such as slope, rainfall, land use, and soil type maps. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques, which provide a good classification when determining disaster threats. The GIS-Fuzzy method yields a multi-hazard map of Semarang City with good accuracy and an appropriate spread of threat classes, providing disaster information for the city's mitigation planning. The multi-hazard modelling shows that the Gaussian membership function gives the best accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership functions tested.
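The Gaussian membership function singled out in the abstract has a simple closed form; the sketch below shows it applied to one illustrative parameter (the center and width values are invented, not the study's calibrated parameters).

```python
# Sketch of a Gaussian fuzzy membership function for classifying a
# parameter value into a hazard class (illustrative parameters only).
import math

def gauss_membership(x, c, sigma):
    """Degree of membership (0..1) in a hazard class centered at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Membership of slope values in a "high hazard" class centered at 40 degrees
print(gauss_membership(40.0, c=40.0, sigma=10.0))  # 1.0 at the center
print(gauss_membership(25.0, c=40.0, sigma=10.0))  # smaller away from it
```

Each map layer's values are fuzzified this way before the weighted overlay, replacing hard score thresholds with graded class membership.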
Directory of Open Access Journals (Sweden)
V.V. Vasilevskij
2016-06-01
Introduction. An important problem in power transformer resource prognosis is the formation of trends of moisture dynamics in the transformer insulation. Purpose. To increase the accuracy of power transformer insulation resource assessment by accounting for moisture dynamics in interrelation with temperature dynamics, and to work out a model of moisture dynamics in the paper insulation-transformer oil system in conjunction with thermodynamic, load, and technical maintenance models. Methodology. The mathematical models used to describe the moisture dynamics are based on nonlinear differential equations. The interrelation of the moisture dynamics model with the thermodynamic, load, and technical maintenance models is described by a UML model. Computer simulation was used to confirm the adequacy of the model. Results. We have implemented a model of moisture dynamics in power transformer insulation in interrelation with the other models that describe the state of a power transformer in operation. The proposed model allows us to form detailed trends of moisture dynamics in power transformer insulation based on monitoring data or on simulation of the transformer's operational factors. We have performed computer simulation of moisture exchange processes and calculated the transformer insulation resource for different moisture trends. Originality. The offered model takes into account moisture dynamics in power transformer insulation under the influence of changes in the transformer's thermal mode and operational factors. Practical value. The offered model can be used in power transformer monitoring systems to automate the resource assessment of the paper insulation of oil-immersed power transformers at different phases of the life cycle. The model can also be used to assess the projected economic efficiency of operating power transformers under projected operating conditions.
CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary
Energy Technology Data Exchange (ETDEWEB)
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substance release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
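The final sentence of this summary describes a simple product-and-sum computation, sketched below. The pathway names, concentrations, and intake factors are illustrative placeholders, not CalTOX's actual twenty-three pathways or parameter values.

```python
# Minimal sketch of the exposure step: average daily dose as the product
# of contact-medium concentration and an intake factor, summed over pathways.

def average_daily_dose(pathways):
    """pathways: list of (concentration, intake_factor) pairs,
    e.g. (mg/L in tap water, L per kg body weight per day)
    -> dose in mg/(kg*day)."""
    return sum(conc * intake for conc, intake in pathways)

dose = average_daily_dose([
    (0.002, 0.029),   # tap water ingestion: mg/L * L/(kg*day)
    (0.010, 0.0003),  # soil ingestion: mg/kg * kg/(kg*day)
])
print(f"{dose:.2e} mg/(kg*day)")
```

In the full model, the concentrations on the left of each pair come from the multimedia transport and transformation model rather than being specified directly.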
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why using innovative techniques customised for broad-scale use is preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer, and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
Energy Technology Data Exchange (ETDEWEB)
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Kinetics model of bainitic transformation with stress
Zhou, Mingxing; Xu, Guang; Hu, Haijiang; Yuan, Qing; Tian, Junyu
2018-01-01
Thermal simulations were conducted on a Gleeble 3800 simulator. The main purpose is to investigate the effects of stress on the kinetics of bainitic transformation in a Fe-C-Mn-Si advanced high strength bainitic steel. Previous studies on modeling the kinetics of stress-affected bainitic transformation considered only stresses below the yield strength of prior austenite. In the present study, stresses above the yield strength of prior austenite are taken into account. A new kinetics model of bainitic transformation dependent on the stress (including stresses below and above the yield strength of prior austenite) and the transformation temperature is proposed. The new model presents good agreement with experimental results. In addition, it is found that the degree of acceleration of bainitic transformation increases with the stress whether its magnitude is below or above the yield strength of austenite, but the rate of increase gradually slows down when the stress is above the yield strength of austenite.
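To make the notion of "stress accelerating transformation kinetics" concrete, the sketch below uses a generic JMAK-type (Avrami) transformed-fraction curve with a stress-enlarged rate constant. This is NOT the authors' model; the functional form and all parameter values are illustrative only.

```python
# Illustrative JMAK-type kinetics: transformed fraction
# f(t) = 1 - exp(-k * t**n), with k increased by applied stress to
# mimic the acceleration effect reported in the abstract.
import math

def transformed_fraction(t, k, n):
    return 1.0 - math.exp(-k * t**n)

k0 = 1e-3            # rate constant without stress (illustrative)
k_stressed = 2e-3    # larger k: stress accelerates the transformation
t = 50.0             # seconds
print(transformed_fraction(t, k0, n=1.5))
print(transformed_fraction(t, k_stressed, n=1.5))
```

In the paper's model, the dependence of the kinetics on stress is calibrated separately for stresses below and above the austenite yield strength, which this one-parameter sketch does not capture.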
DEFF Research Database (Denmark)
Greenyer, Joel; Kindler, Ekkart
2010-01-01
and for model-based software engineering approaches in general. QVT (Query/View/Transformation) is the transformation technology recently proposed for this purpose by the OMG. TGGs (Triple Graph Grammars) are another transformation technology proposed in the mid-nineties, used for example in the FUJABA CASE...
Research on lightning stroke model and characteristics of electronic transformer
Directory of Open Access Journals (Sweden)
Li Mu
2018-01-01
In order to improve the reliability of power supply, a large number of electronic voltage and current transformers are used in digital substations. In this paper, the mathematical model of the electronic transformer is first analyzed and its circuit model is given. According to the differences in working characteristics between voltage transformers and current transformers, circuit models of the voltage-type and current-type electronic transformers are given separately. The accuracy of the models is verified by analyzing their broadband transmission characteristics, and their lightning analysis models are obtained.
Metrics for analyzing the quality of model transformations
Amstel, van M.F.; Lange, C.F.J.; Brand, van den M.G.J.; Falcone, G.; Guéhéneuc, Y.G.; Lange, C.F.J.; Porkoláb, Z.; Sahraoui, H.A.
2008-01-01
Model transformations become increasingly important with the emergence of model-driven engineering of, amongst others, object-oriented software systems. It is therefore necessary to define and evaluate the quality of model transformations. The goal of our research is to make the quality of model
Model based analysis of piezoelectric transformers.
Hemsel, T; Priya, S
2006-12-22
Piezoelectric transformers are increasingly popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise, and non-flammability. In addition to conventional applications such as ballasts for backlight inverters in notebook computers, camera flashes, and fuel ignition, several new applications have emerged, such as AC/DC converters, battery chargers, and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, transformer power density is limited to 40 W/cm³, obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential to provide higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode at both the input and output ports and has unidirectional polarization in the ceramics. This design was found to provide 30 W of power with an efficiency of 98% and a temperature rise of 30 °C above room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer, with excellent agreement between the computed and experimental results. The results of this study make it possible to deterministically design unipoled piezoelectric transformers with specified performance. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components.
A discrete dislocation–transformation model for austenitic single crystals
International Nuclear Information System (INIS)
Shi, J; Turteltaub, S; Remmers, J J C; Van der Giessen, E
2008-01-01
A discrete model for analyzing the interaction between plastic flow and martensitic phase transformations is developed. The model is intended for simulating the microstructure evolution in a single crystal of austenite that transforms non-homogeneously into martensite. The plastic flow in the untransformed austenite is simulated using a plane-strain discrete dislocation model. The phase transformation is modeled via the nucleation and growth of discrete martensitic regions embedded in the austenitic single crystal. At each instant during loading, the coupled elasto-plasto-transformation problem is solved using the superposition of analytical solutions for the discrete dislocations and discrete transformation regions embedded in an infinite homogeneous medium and the numerical solution of a complementary problem used to enforce the actual boundary conditions and the heterogeneities in the medium. In order to describe the nucleation and growth of martensitic regions, a nucleation criterion and a kinetic law suitable for discrete regions are specified. The constitutive rules used in discrete dislocation simulations are supplemented with additional evolution rules to account for the phase transformation. To illustrate the basic features of the model, simulations of specimens under plane-strain uniaxial extension and contraction are analyzed. The simulations indicate that plastic flow reduces the average stress at which transformation begins, but it also reduces the transformation rate when compared with benchmark simulations without plasticity. Furthermore, due to local stress fluctuations caused by dislocations, martensitic systems can be activated even though transformation would not appear to be favorable based on the average stress. Conversely, the simulations indicate that the plastic hardening behavior is influenced by the reduction in the effective austenitic grain size due to the evolution of transformation. 
During cyclic simulations, the coupled plasticity-transformation
Acceleration transforms and statistical kinetic models
International Nuclear Information System (INIS)
LuValle, M.J.; Welsher, T.L.; Svoboda, K.
1988-01-01
For a restricted class of problems a mathematical model of microscopic degradation processes, statistical kinetics, is developed and linked through acceleration transforms to the information which can be obtained from a system in which the only observable sign of degradation is sudden and catastrophic failure. The acceleration transforms were developed in accelerated life testing applications as a tool for extrapolating from the observable results of an accelerated life test to the dynamics of the underlying degradation processes. A particular concern of a physicist attempting to interpret the results of an analysis based on acceleration transforms is determining the physical species involved in the degradation process. These species may be (a) relatively abundant or (b) relatively rare. The main results of this paper are a theorem showing that for an important subclass of statistical kinetic models, acceleration transforms cannot be used to distinguish between cases (a) and (b), and an example showing that in some cases falling outside the restrictions of the theorem, cases (a) and (b) can be distinguished by their acceleration transforms.
Reliability Model of Power Transformer with ONAN Cooling
M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh
2010-01-01
The reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive pieces of equipment in a power system, and their proper functioning is vital for substations and utilities. A reliability model of the power transformer is therefore very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...
Religiousness and hazardous alcohol use: a conditional indirect effects model.
Jankowski, Peter J; Hardy, Sam A; Zamboanga, Byron L; Ham, Lindsay S
2013-08-01
The current study examined a conditional indirect effects model of the association between religiousness and adolescents' hazardous alcohol use. In doing so, we responded to the need to include both mediators and moderators, and the need for theoretically informed models when examining religiousness and adolescents' alcohol use. The sample consisted of 383 adolescents, aged 15-18, who completed an online questionnaire. Results of structural equation modeling supported the proposed model. Religiousness was indirectly associated with hazardous alcohol use through both positive alcohol expectancy outcomes and negative alcohol expectancy valuations. Significant moderating effects for alcohol expectancy valuations on the association between alcohol expectancies and alcohol use were also found. The effects for alcohol expectancy valuations confirm valuations as a distinct construct to that of alcohol expectancy outcomes, and offer support for the protective role of internalized religiousness on adolescents' hazardous alcohol use as a function of expectancy valuations. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Rockfall hazard analysis using LiDAR and spatial modeling
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.
The Drivers of Success in Business Model Transformation
Directory of Open Access Journals (Sweden)
Nenad Savič
2016-01-01
Existing empirical literature on business models is still inconclusive about the key drivers of successful business model transformation. The paper explores this issue by using a single longitudinal case study design in combination with a grounded theory approach on a medium-sized, high-tech and globally oriented company. Based on on-site visits, interviews and secondary documentation analysis, the study identifies six generic drivers of successful business model transformation: transformational leadership, discovery-driven decision-making, industry improvement with customer-specific orientation, content-oriented communication, self-initiating collaborators, and a phased separation strategy. The new drivers supplement our existing knowledge of how successful transformation takes place, while an extensive discussion of their implications may help managers execute business transformations more effectively.
Modeling of Marine Natural Hazards in the Lesser Antilles
Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim
2010-05-01
The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis and flooding. The historical data of marine natural hazards for the Lesser Antilles and, especially, for Guadeloupe are presented briefly. Numerical simulations of several historical tsunamis in the Caribbean Sea (the 1755 Lisbon trans-Atlantic tsunami, the 1867 Virgin Island earthquake tsunami, the 2003 Montserrat volcano tsunami) are performed within the framework of the nonlinear shallow-water theory. Numerical results demonstrate the importance of the real bathymetry variability with respect to the direction of propagation of the tsunami wave and its characteristics. The prognostic tsunami wave height distribution along the Caribbean coast is computed using various forms of seismic and hydrodynamic sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular cyclones "Lilli" in 2002 and "Dean" in 2007. The obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. The results are described in detail in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science of Tsunami Hazards, 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I
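The abstract above repeatedly invokes nonlinear shallow-water theory. As a reference point, a commonly used generic form of the nonlinear shallow-water equations is sketched below; the cited simulations may include additional terms (bottom friction, Coriolis force, source terms) not shown here.

```latex
\begin{aligned}
\frac{\partial \eta}{\partial t}
  + \nabla \cdot \big[(h + \eta)\,\mathbf{u}\big] &= 0, \\
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  + g\,\nabla \eta &= 0,
\end{aligned}
```

where \(\eta\) is the free-surface elevation, \(h\) the undisturbed water depth (the bathymetry whose variability the abstract emphasizes), \(\mathbf{u}\) the depth-averaged horizontal velocity, and \(g\) the gravitational acceleration.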
Fourier series models through transformation | Omekara | Global ...
African Journals Online (AJOL)
As a result, the square transformation, which outperforms the others, is adopted. Consequently, each of the multiplicative and additive FSA models fitted to the transformed data is then subjected to a test for white noise based on spectral analysis. The result of this test shows that only the multiplicative model is adequate.
Nonlinear Model of Tape Wound Core Transformers
Directory of Open Access Journals (Sweden)
A. Vahedi
2015-03-01
Recently, tape wound cores, due to their excellent magnetic properties, have been widely used in different types of transformers. Performance prediction of these transformers needs an accurate model with the ability to determine flux distribution within the core and magnetic loss. The spiral structure of tape wound cores affects the flux distribution and always complicates analysis. In this paper, a model based on the reluctance network method is presented for analysis of magnetic flux in wound cores. Using this model, the distribution of longitudinal and transverse fluxes within the core can be determined. To consider the nonlinearity of the core, a dynamic hysteresis model is included in the presented model. With the flux density known at different points of the core, magnetic losses can be calculated. To evaluate the validity of the model, results are compared with 2-D FEM simulations. In addition, a transformer was designed for a series-resonant converter, and simulation results are compared with experimental measurements. Comparisons show the accuracy of the model, as well as its simplicity and fast convergence.
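The reluctance network method mentioned above treats the magnetic core as a circuit of reluctances driven by magnetomotive force. The sketch below is a deliberately minimal illustration of that idea, not the paper's model (which uses a full network with transverse branches and dynamic hysteresis); all dimensions and the MMF value are hypothetical.

```python
# Minimal magnetic-circuit sketch of the reluctance-network idea.
# Illustrative only: the paper's model is a full 2-D network with
# a dynamic hysteresis model; dimensions and MMF here are invented.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, H/m

def reluctance(length_m, area_m2, mu_r):
    """Magnetic reluctance R = l / (mu0 * mu_r * A), in A-turns/Wb."""
    return length_m / (MU0 * mu_r * area_m2)

def series_flux(mmf_at, reluctances):
    """Flux through a series magnetic circuit: Phi = MMF / sum(R)."""
    return mmf_at / sum(reluctances)

# Toy core: two series segments of a wound core (assumed dimensions).
r1 = reluctance(0.10, 1e-4, 5000.0)   # 10 cm path, 1 cm^2 cross-section
r2 = reluctance(0.05, 1e-4, 5000.0)   # 5 cm path, same cross-section
phi = series_flux(20.0, [r1, r2])     # 20 A-turns of MMF
b = phi / 1e-4                        # flux density (T) in the section
```

In the full network model the same idea is applied per branch, with nonlinear (hysteretic) reluctances updated each time step.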
Has the Nordic Welfare Model Been Transformed?
DEFF Research Database (Denmark)
Greve, Bent; Kvist, Jon
2011-01-01
The Nordic welfare model is undergoing a fundamental transformation. Using Denmark we show how a universal welfare state model is gradually being transformed into an emergent multi-tiered welfare state. Whereas the Danish pension system's having become multi-tiered in the 1990s, with private...... and the sick. Although Denmark still offers universal coverage in core welfare state areas, the increased use of occupational and fiscal welfare as well as changes in public schemes has gradually transformed the nation into a multi-tiered welfare state that is more dualistic and individualistic...
Geospatial subsidence hazard modelling at Sterkfontein Caves ...
African Journals Online (AJOL)
The geo-hazard subsidence model includes historic subsidence occurrences, terrain (water flow) and water accumulation. Water accumulating on the surface will percolate and reduce the strength of the soil mass, possibly inducing subsidence. Areas for further geotechnical investigation are identified, demonstrating that a ...
Oil transformation sector modelling: price interactions
International Nuclear Information System (INIS)
Maurer, A.
1992-01-01
A global model of oil and oil product price evolution is proposed that covers the influence of the transformation sector and the formation of final user prices, together with price interactions between gaseous and liquid hydrocarbons. Large disparities among oil product prices in the various consumer zones (North America, Western Europe, Japan) are well described and compared with the small differences between oil supply prices in these zones. Final user price fluctuations are shown to be induced by transformation differences and competition; the natural gas market is also modelled
Elsässer, Thilo
Exposure to radiation of high-energy and highly charged ions (HZE) poses a major risk to human beings, since in long-term space exploration about 10 protons per month and about one HZE particle per month hit each cell nucleus (1). Despite the larger number of light ions, the high ionisation power of HZE particles and the correspondingly more complex damage they cause represent a major hazard for astronauts. Therefore, in order to get a reasonable risk estimate, it is necessary to take into account the entire mixed radiation field. Frequently, neoplastic cell transformation serves as an indicator for the oncogenic potential of radiation exposure. It can be measured for a small number of ion and energy combinations. However, due to the complexity of the radiation field it is necessary to know the contribution to the radiation damage of each ion species for the entire range of energies. Therefore, a model is required which transfers the few experimental data to other particles with different LETs. We use the Local Effect Model (LEM) (2) with its cluster extension (3) to calculate the relative biological effectiveness (RBE) of neoplastic transformation. It was originally developed in the framework of hadrontherapy and is applicable to a large range of ions and energies. The input parameters for the model include the linear-quadratic parameters for the induction of lethal events as well as for the induction of transformation events per surviving cell. Both processes, cell inactivation and neoplastic transformation per viable cell, are combined to eventually yield the RBE for cell transformation. We show that the Local Effect Model is capable of predicting the RBE of neoplastic cell transformation for a broad range of ions and energies. The comparison of experimental data (4) with model calculations shows reasonable agreement. We find that the cluster extension results in a better representation of the measured RBE values. With this model it should be possible to better
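The abstract builds on linear-quadratic (LQ) dose-response parameters and an iso-effect notion of RBE. A hedged sketch of those two standard ingredients is given below; the parameter values are hypothetical and this is not the Local Effect Model itself, only the LQ bookkeeping it takes as input.

```python
import math

def lq_survival(dose_gy, alpha, beta):
    """Linear-quadratic response: S = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def iso_effect_dose(effect, alpha, beta):
    """Reference dose giving -ln(S) = effect; solves beta*D^2 + alpha*D = E."""
    if beta == 0.0:
        return effect / alpha
    return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * effect)) / (2.0 * beta)

def rbe(ion_dose_gy, alpha_ion, beta_ion, alpha_x, beta_x):
    """RBE = iso-effective reference (photon) dose / ion dose."""
    effect = alpha_ion * ion_dose_gy + beta_ion * ion_dose_gy ** 2
    return iso_effect_dose(effect, alpha_x, beta_x) / ion_dose_gy

# Hypothetical parameters: a high-LET ion with a larger alpha than the
# reference radiation yields RBE > 1 at this dose.
value = rbe(2.0, alpha_ion=0.8, beta_ion=0.05, alpha_x=0.2, beta_x=0.05)
```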
Teamwork tools and activities within the hazard component of the Global Earthquake Model
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis, ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called the OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of ongoing initiatives like the development of a suite of tools for the preparation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios
DEFF Research Database (Denmark)
Custer, Rocco; Nishijima, Kazuyoshi
In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified…
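The core mechanism described above, splitting an aggregated exposure count across hazard cells with Dirichlet-distributed random shares centred on auxiliary-indicator weights, can be sketched with the standard library. Note the simplification: this uses the ordinary Dirichlet distribution, whereas the paper employs the scaled Dirichlet (which adds per-component scale parameters); all numbers are hypothetical.

```python
import random

def dirichlet_sample(alphas, rng):
    """Sample Dirichlet(alphas) via normalized Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def disaggregate(total_exposures, indicator_weights, concentration, rng):
    """Randomly split an aggregated exposure count across hazard cells.

    indicator_weights: auxiliary-indicator shares (e.g. land-cover fractions).
    concentration: higher values pull the random shares closer to the
    indicator weights, i.e. less disaggregation uncertainty."""
    alphas = [concentration * w for w in indicator_weights]
    shares = dirichlet_sample(alphas, rng)
    return [s * total_exposures for s in shares]

rng = random.Random(42)
# 1000 buildings in a municipality, split over 3 hazard cells whose
# assumed land-cover shares are 50/30/20 percent.
cells = disaggregate(1000.0, [0.5, 0.3, 0.2], concentration=50.0, rng=rng)
```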
A conflict model for the international hazardous waste disposal dispute
International Nuclear Information System (INIS)
Hu Kaixian; Hipel, Keith W.; Fang, Liping
2009-01-01
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
A conflict model for the international hazardous waste disposal dispute.
Hu, Kaixian; Hipel, Keith W; Fang, Liping
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Multivariate Models for Prediction of Human Skin Sensitization Hazard
Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole
2016-01-01
One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
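The abstract combines assay readouts into a logistic regression (LR) hazard classifier. The sketch below shows the mechanics of such an LR model on invented data: the two features loosely stand in for DPRA and h-CLAT scores, but neither the rows, the labels, nor the trained weights come from the study.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent for logistic regression; w[0] is the intercept."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """1 = predicted sensitizer, 0 = predicted non-sensitizer."""
    return 1 if sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi))) >= 0.5 else 0

# Hypothetical rows: [assay score 1, assay score 2], label 1 = sensitizer.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.2, 0.1], [0.1, 0.3], [0.2, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w = train_logistic(X, y)
```

The study's actual pipeline additionally evaluates SVMs, twelve variable groups, and an external test set; none of that is reproduced here.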
Development and verification of printed circuit board toroidal transformer model
DEFF Research Database (Denmark)
Pejtersen, Jens; Mønster, Jakob Døllner; Knott, Arnold
2013-01-01
An analytical model of an air core printed circuit board embedded toroidal transformer configuration is presented. The transformer has been developed for galvanic isolation of very high frequency switch-mode dc-dc power converter applications. The theoretical model is developed and verified by comparing calculated parameters with 3D finite element simulations and experimental measurement results. The developed transformer model shows good agreement with the simulated and measured results. The model can be used to predict the parameters of printed circuit board toroidal transformer configurations…
Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan
Directory of Open Access Journals (Sweden)
Yu-Wen Chang
2017-01-01
Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault during a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source. It serves as a reference for developing a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the fault. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results show that the seismic demand of near-fault areas is lower than current hazard estimates when the time-dependent model is used on those faults, particularly for faults, such as the Chelungpu fault, whose elapsed time since the last event is short.
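The BPT-versus-Poisson contrast in the abstract can be made concrete with the standard closed forms: the Poisson model gives a memoryless occurrence probability, while the BPT (inverse Gaussian) model conditions on the elapsed time since the last event. The fault parameters below are hypothetical, chosen only to show that a recently ruptured fault gets a lower time-dependent probability than the Poisson estimate, matching the abstract's finding.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mean, alpha):
    """CDF of the Brownian passage time (inverse Gaussian) distribution
    with mean recurrence `mean` and aperiodicity `alpha`."""
    if t <= 0.0:
        return 0.0
    lam = mean / alpha ** 2           # inverse-Gaussian shape parameter
    u = math.sqrt(lam / t)
    return (phi(u * (t / mean - 1.0))
            + math.exp(2.0 * lam / mean) * phi(-u * (t / mean + 1.0)))

def poisson_prob(mean, window):
    """Time-independent probability of at least one event in `window` years."""
    return 1.0 - math.exp(-window / mean)

def bpt_conditional_prob(mean, alpha, elapsed, window):
    """P(event in next `window` years | quiet for `elapsed` years)."""
    f_t = bpt_cdf(elapsed, mean, alpha)
    return (bpt_cdf(elapsed + window, mean, alpha) - f_t) / (1.0 - f_t)

# Hypothetical fault: 150-year mean recurrence, aperiodicity 0.5.
p_poisson = poisson_prob(150.0, 50.0)
p_early = bpt_conditional_prob(150.0, 0.5, elapsed=20.0, window=50.0)
p_late = bpt_conditional_prob(150.0, 0.5, elapsed=150.0, window=50.0)
```

Here `p_early < p_poisson < p_late`: shortly after a rupture the time-dependent hazard sits below the Poisson value, and it overtakes it once the elapsed time approaches the mean recurrence.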
COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING
Directory of Open Access Journals (Sweden)
N. Mijani
2017-09-01
Landslides are among the main geomorphic processes affecting development in mountainous areas and cause disastrous accidents. A landslide is an event governed by several uncertain criteria, such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contribution of this paper is a comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, and a comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, an Iranian city that has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment showed that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The zoning accuracy obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR cover 13, 26 and 35 percent of the study area, respectively, at a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zonation in the city of Sari, with the Fuzzy Gamma method second by a small margin.
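The Fuzzy Gamma and Fuzzy-OR overlay operators named above have standard textbook definitions, sketched below on invented per-cell membership values (the study's actual criteria weights and memberships are not reproduced). Fuzzy Gamma interpolates between the pessimistic algebraic product and the optimistic algebraic sum via the exponent gamma.

```python
def fuzzy_product(memberships):
    """Fuzzy algebraic product: prod(m_i)."""
    out = 1.0
    for m in memberships:
        out *= m
    return out

def fuzzy_algebraic_sum(memberships):
    """Fuzzy algebraic sum: 1 - prod(1 - m_i)."""
    out = 1.0
    for m in memberships:
        out *= (1.0 - m)
    return 1.0 - out

def fuzzy_or_max(memberships):
    """Fuzzy-OR overlay: the maximum membership value."""
    return max(memberships)

def fuzzy_gamma(memberships, gamma):
    """Fuzzy Gamma: (algebraic sum)^gamma * (product)^(1 - gamma)."""
    return (fuzzy_algebraic_sum(memberships) ** gamma
            * fuzzy_product(memberships) ** (1.0 - gamma))

# Hypothetical cell memberships for, say, slope, land-use and rainfall layers.
cell = [0.8, 0.6, 0.9]
hazard_or = fuzzy_or_max(cell)
hazard_gamma = fuzzy_gamma(cell, 0.85)
```

Because Fuzzy-OR takes the single largest membership, it tends to flag the most area as high hazard, consistent with the 35 percent very-high-risk coverage reported for it above.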
TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION
Directory of Open Access Journals (Sweden)
Goran Klepac
2007-12-01
The REFII model is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is the linkage of different methods for time series analysis: linking traditional data mining tools to time series and constructing new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, meaning it is not restricted to a fixed set of methods. At its core, it is a model for transforming the values of a time series, which prepares the data used by different sets of methods based on the same transformation model in the domain of the problem space. The REFII model offers a new approach to time series analysis based on a unique model of transformation, which serves as a base for all kinds of time series analysis. A further advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.
Analyzing Right-Censored Length-Biased Data with Additive Hazards Model
Institute of Scientific and Technical Information of China (English)
Mu ZHAO; Cun-jie LIN; Yong ZHOU
2017-01-01
Length-biased data are often encountered in observational studies, when the survival times are left-truncated and right-censored and the truncation times follow a uniform distribution. In this article, we propose to analyze such data with the additive hazards model, which specifies that the hazard function is the sum of an arbitrary baseline hazard function and a regression function of covariates. We develop estimating equation approaches to estimate the regression parameters. The resultant estimators are shown to be consistent and asymptotically normal. Some simulation studies and a real data example are used to evaluate the finite sample properties of the proposed estimators.
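For reference, the additive hazards model the abstract refers to is commonly written in the Lin-Ying form below; the paper's contribution lies in adapting the estimating equation to length-biased sampling, which modifies the weights but not the basic structure shown here.

```latex
\lambda(t \mid Z) \;=\; \lambda_0(t) + \beta^{\mathsf T} Z(t),
\qquad
U(\beta) \;=\; \sum_{i=1}^{n} \int_0^{\tau}
  \bigl\{ Z_i(t) - \bar{Z}(t) \bigr\}
  \bigl\{ \mathrm{d}N_i(t) - Y_i(t)\,\beta^{\mathsf T} Z_i(t)\,\mathrm{d}t \bigr\},
```

where \(\lambda_0\) is the arbitrary baseline hazard, \(N_i\) and \(Y_i\) are the counting and at-risk processes, and \(\bar{Z}(t) = \sum_j Y_j(t) Z_j(t) / \sum_j Y_j(t)\) is the at-risk average of the covariates. Setting \(U(\beta) = 0\) yields a closed-form estimator of \(\beta\).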
Multi-linear model of transformation of runoff in river-basins
International Nuclear Information System (INIS)
Szolgay, J.; Kubes, R.
2005-01-01
One component of the precipitation-runoff model of the Hron River is an individual model of flow transformation in the river network, which transforms the runoff from the separate partial catchments into the terminal profile. This component of the precipitation-runoff model can also be used as a standalone hydrologic model of runoff-wave transformation in a river basin. Identification and calibration of this model are carried out independently of the Hron River precipitation-runoff model, which is described in detail in this chapter.
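Multi-linear transformation of runoff waves is typically built from cascades of linear reservoirs. As a hedged minimal sketch (not the Hron model itself; the storage constants and the inflow wave are invented), one linear reservoir obeys S = k·O, so mass balance dS/dt = I - O gives a simple explicit routing update:

```python
def route_linear_reservoir(inflow, k, dt=1.0):
    """Route an inflow hydrograph through one linear reservoir (S = k * O).
    Explicit update: O_next = O + (dt / k) * (I - O); stable for dt <= k."""
    outflow = [0.0]
    for i_t in inflow:
        o_prev = outflow[-1]
        outflow.append(o_prev + (dt / k) * (i_t - o_prev))
    return outflow[1:]

def route_cascade(inflow, ks, dt=1.0):
    """Chain several reservoirs: the outflow of one feeds the next."""
    flow = inflow
    for k in ks:
        flow = route_linear_reservoir(flow, k, dt)
    return flow

# Hypothetical runoff wave (m^3/s) routed through two sub-reaches.
wave = [0.0, 5.0, 20.0, 12.0, 6.0, 3.0, 1.0, 0.0, 0.0, 0.0]
routed = route_cascade(wave, ks=[3.0, 2.0])
```

The routed wave shows the two qualitative effects such a transformation model must reproduce: peak attenuation and peak delay.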
Code Generation by Model Transformation : A Case Study in Transformation Modularity
Hemel, Z.; Kats, L.C.L.; Visser, E.
2008-01-01
Preprint of paper published in: Theory and Practice of Model Transformations (ICMT 2008), Lecture Notes in Computer Science 5063; doi:10.1007/978-3-540-69927-9_13 The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific
A new General Lorentz Transformation model
International Nuclear Information System (INIS)
Novakovic, Branko; Novakovic, Alen; Novakovic, Dario
2000-01-01
A new general structure of Lorentz Transformations, in the form of the General Lorentz Transformation model (GLT-model), has been derived. This structure includes both the Lorentz-Einstein and Galilean Transformations as its particular (special) realizations. Since the free parameters of the GLT-model have been identified in a gravitational field, the GLT-model can be employed both in Special and General Relativity. Consequently, the possibility of a unification of Einstein's Special and General Theories of Relativity, as well as a unification of electromagnetic and gravitational fields, is opened. If the GLT-model is correct, then there exist four new observational phenomena (length and time neutrality, and length dilation and time contraction). Besides, the well-known phenomena (length contraction and time dilation) are also constituents of the GLT-model. This means that there is a symmetry in the GLT-model, where the center of this symmetry is represented by length and time neutrality. Time and length neutrality in a gravitational field can be realized if the velocity of a moving system is equal to the free-fall velocity. Time and length neutrality also imply an observation of particle mass neutrality. Special consideration has been devoted to the correlation between the GLT-model and the limitation on particle velocities, in order to investigate the possibility of travel time reduction. It is found that an observation of a particle speed faster than c = 299 792 458 m/s is possible in a gravitational field, if certain conditions are fulfilled
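The GLT structure itself is specific to the cited work, but the two special realizations the abstract names are standard and worth recalling for orientation: the Lorentz-Einstein boost and its Galilean limit.

```latex
x' = \gamma \,(x - v t), \qquad
t' = \gamma \left(t - \frac{v x}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
```

which reduces to the Galilean transformation \(x' = x - v t,\; t' = t\) in the limit \(c \to \infty\) (equivalently \(\gamma \to 1\)). Any generalization claiming both as special cases must recover these forms for the corresponding parameter choices.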
Seismic hazard studies in Egypt
Directory of Open Access Journals (Sweden)
Abuo El-Ela A. Mohamed
2012-12-01
The study of earthquake activity and seismic hazard assessment in Egypt is very important due to the great and rapid spread of large investments in national projects, especially the nuclear power plant that will be built in the northern part of Egypt. Although Egypt is characterized by low seismicity, it has experienced damaging earthquakes throughout its history. The seismotectonic setting of Egypt suggests that large earthquakes are possible, particularly along the Gulf of Aqaba-Dead Sea transform, the subduction zone along the Hellenic and Cyprean Arcs, and the Northern Red Sea triple junction point. In addition, some significant inland sources at Aswan, Dahshour, and the Cairo-Suez District should be considered. The seismic hazard for Egypt is calculated utilizing a probabilistic approach (for a grid of 0.5° × 0.5°) within a logic-tree framework. Alternative seismogenic models and ground motion scaling relationships are selected to account for the epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for four ground motion spectral periods and for different return periods. In addition, the uniform hazard spectra for rock sites for 25 different periods, and the probabilistic hazard curves for the cities of Cairo and Alexandria, are graphed. The highest peak ground acceleration (PGA) values were found close to the Gulf of Aqaba, at about 220 gal for a 475-year return period, while the lowest PGA values, less than 25 gal, were detected in the western part of the Western Desert.
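The 475-year return period quoted above is the standard PSHA design convention. Under the usual Poisson assumption it corresponds to roughly a 10% probability of exceedance in a 50-year exposure time, as the short sketch below shows:

```python
import math

def exceedance_prob(return_period_yr, exposure_yr):
    """Poisson probability of at least one exceedance in `exposure_yr` years."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

def return_period(prob, exposure_yr):
    """Return period matching an exceedance probability over `exposure_yr`."""
    return -exposure_yr / math.log(1.0 - prob)

# The familiar design convention: ~10% in 50 years <-> ~475-year return period.
p_475 = exceedance_prob(475.0, 50.0)
rp_10pct = return_period(0.10, 50.0)
```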
Modelling human interactions in the assessment of man-made hazards
International Nuclear Information System (INIS)
Nitoi, M.; Farcasiu, M.; Apostol, M.
2016-01-01
Current human reliability assessment tools are not capable of adequately modelling the human ability to adapt, to innovate and to manage under extreme situations. The paper presents the results obtained by the ICN PSA team in the frame of the FP7 Advanced Safety Assessment Methodologies: extended PSA (ASAMPSA_E) project regarding the investigation of conducting HRA for man-made hazards. The paper proposes a four-step methodology for the assessment of human interactions in external events (definition and modelling of human interactions; quantification of human failure events; recovery analysis; review). The most relevant factors with respect to HRA for man-made hazards (response execution complexity; existence of procedures with respect to the scenario in question; time available for action; timing of cues; accessibility of equipment; harsh environmental conditions) are presented and discussed thoroughly. The challenges identified in relation to HRA for man-made hazards are highlighted. (authors)
Theory and Model for Martensitic Transformations
DEFF Research Database (Denmark)
Lindgård, Per-Anker; Mouritsen, Ole G.
1986-01-01
Martensitic transformations are shown to be driven by the interplay between two fluctuating strain components. No soft mode is needed, but a central peak occurs representing the dynamics of strain clusters. A two-dimensional magnetic-analog model with the martensitic-transition symmetry is constructed and analyzed by computer simulation and by a theory which accounts for correlation effects. Dramatic precursor effects at the first-order transition are demonstrated. The model is also of relevance for surface reconstruction transitions.
Unifying approach for model transformations in the MOF metamodeling architecture
Ivanov, Ivan; van den Berg, Klaas
2004-01-01
In the Meta Object Facility (MOF) metamodeling architecture a number of model transformation scenarios can be identified. It could be expected that a metamodeling architecture will be accompanied by a transformation technology supporting the model transformation scenarios in a uniform way. Despite
Building adaptable and reusable XML applications with model transformations
Ivanov, Ivan; van den Berg, Klaas
2005-01-01
We present an approach in which the semantics of an XML language is defined by means of a transformation from an XML document model (an XML schema) to an application specific model. The application specific model implements the intended behavior of documents written in the language. A transformation
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
International Nuclear Information System (INIS)
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate, and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE, and NRC was initiated. The purposes of this project were to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
Transformative leadership: an ethical stewardship model for healthcare.
Caldwell, Cam; Voelker, Carolyn; Dixon, Rolf D; LeJeune, Adena
2008-01-01
The need for effective leadership is a compelling priority for those who would choose to govern in public, private, and nonprofit organizations, and applies as much to the healthcare profession as it does to other sectors of the economy (Moody, Horton-Deutsch, & Pesut, 2007). Transformative Leadership, an approach to leadership and governance that incorporates the best characteristics of six other highly respected leadership models, is an integrative theory of ethical stewardship that can help healthcare professionals to more effectively achieve organizational efficiencies, build stakeholder commitment and trust, and create valuable synergies to transform and enrich today's healthcare systems (cf. Caldwell, LeJeune, & Dixon, 2007). The purpose of this article is to introduce the concept of Transformative Leadership and to explain how this model applies within a healthcare context. We define Transformative Leadership and identify its relationship to Transformational, Charismatic, Level 5, Principle-Centered, Servant, and Covenantal Leadership--providing examples of each of these elements of Transformative Leadership within a healthcare leadership context. We conclude by identifying contributions of this article to the healthcare leadership literature.
Flood hazard mapping of Palembang City by using 2D model
Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri
2017-11-01
Palembang, the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that floods almost every year. Flooding in the city is highly related to the Musi River Basin. According to the Indonesian National Agency for Disaster Management (BNPB), the level of flood hazard is high. Natural factors such as high rainfall intensity, inadequate drainage capacity, and backwater flow due to spring tides cause floods in the city. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make flooding worse. The objective of this study is to develop a flood hazard map of Palembang City using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding the general guideline of disaster risk assessment. The result for the 25-year return period of flood shows that, with a 112.47 km2 area of inundation, 14 sub-catchments are categorized at a high hazard level. It is expected that the hazard map can be used for risk assessment.
Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models
Directory of Open Access Journals (Sweden)
Yang beibei Ji
2014-01-01
Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are drawn for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The most significant impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
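The distribution-fitting step described above can be sketched with SciPy, which exposes all three candidate families (the log-logistic appears under the name `fisk`). The clearance times below are synthetic stand-ins, since the SIMS data are not public, and the AIC comparison is a common model-selection choice rather than necessarily the criterion used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic clearance times in minutes (the SIMS data are not public).
times = rng.weibull(1.5, size=500) * 30.0

candidates = {
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "log-logistic": stats.fisk,  # SciPy's name for the log-logistic
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)              # location fixed at zero
    loglik = np.sum(dist.logpdf(times, *params))  # log-likelihood at the MLE
    k = len(params) - 1                           # loc was fixed, not estimated
    aic[name] = 2 * k - 2 * loglik                # Akaike information criterion

best = min(aic, key=aic.get)
print(sorted(aic.items()), best)
```

The family with the lowest AIC would be reported as the best fit for that incident type.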
Feasibility of EPC to BPEL Model Transformations based on Ontology and Patterns
Meertens, Lucas O.; Iacob, Maria Eugenia; Eckartz, Silja M.; Rinderle-Ma, Stefanie; Sadiq, Shazia; Leymann, Frank
2010-01-01
Model-Driven Engineering holds the promise of transforming business models into code automatically. This requires the concept of model transformation. In this paper, we assess the feasibility of model transformations from Event-driven Process Chain models to Business Process Execution Language
Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios
Custer, Rocco; Nishijima, Kazuyoshi
2012-01-01
In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of the hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicators and the disaggregated number of exposures is ...
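A probabilistic disaggregation in the spirit described above can be sketched as a multinomial draw: an aggregated exposure count is split across hazard-grid cells with probabilities proportional to an auxiliary indicator, yielding a distribution of splits rather than one deterministic allocation. The counts and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Aggregated exposure: total number of buildings reported for one municipality.
total = 1000
# Auxiliary indicator per hazard-grid cell (e.g. built-up land-cover share),
# normalised to probabilities; the numbers are invented for illustration.
weights = np.array([0.50, 0.30, 0.15, 0.05])

# Probabilistic disaggregation: one multinomial draw per realisation, instead
# of a single deterministic proportional split.
samples = rng.multinomial(total, weights, size=5000)

print(samples.shape, int(samples[0].sum()))
```

Each realisation conserves the aggregated total, while the spread across realisations quantifies the disaggregation uncertainty.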
International Nuclear Information System (INIS)
Li, S.; Zhu, R.; Karaman, I.; Arróyave, R.
2013-01-01
In this work, we modify existing models to simulate the kinetics of bainitic transformation during the bainitic isothermal transformation (BIT) stage of a typical two-stage heat treatment – BIT is preceded by an intercritical annealing treatment – for TRIP steels. This effort is motivated by experiments performed in a conventional TRIP steel alloy (Fe–0.32C–1.42Mn–1.56Si) that suggest that thermodynamics alone are not sufficient to predict the amount of retained austenite after BIT. The model implemented in this work considers the non-homogeneous distribution of carbon – resulting from finite carbon diffusion rates – within the retained austenite during bainitic transformation. This non-homogeneous distribution is responsible for average austenite carbon enrichments beyond the so-called T0 line, the temperature at which the chemical driving force for the bainitic transformation is exhausted. In order to attain good agreement with experiments, the existence of carbon-rich austenite films adjacent to bainitic ferrite plates is posited. The presence of this austenite film is motivated by earlier experimental work published by other groups in the past decade. The model is compared with experimental results and good qualitative agreement is found.
Hazard function theory for nonstationary natural hazards
Read, L.; Vogel, R. M.
2015-12-01
Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
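For the Poisson-GP setting mentioned above, the hazard function of a 2-parameter Generalized Pareto distribution has a simple closed form, h(x) = 1/(scale + shape·x). This is a standard property of the GPD, verified here numerically against SciPy rather than taken from the paper's derivation.

```python
import numpy as np
from scipy import stats

def gp_hazard(x, shape, scale):
    """Closed-form hazard of the 2-parameter Generalized Pareto:
    h(x) = f(x) / S(x) = 1 / (scale + shape * x)."""
    return 1.0 / (scale + shape * x)

shape, scale = 0.2, 1.0
x = np.linspace(0.0, 5.0, 50)
h_closed = gp_hazard(x, shape, scale)

# Numerical check: hazard = pdf / survival function from SciPy's genpareto.
h_num = (stats.genpareto.pdf(x, shape, scale=scale)
         / stats.genpareto.sf(x, shape, scale=scale))
print(bool(np.allclose(h_closed, h_num)))
```

Note that the hazard is decreasing in x for shape > 0 (heavy tails), which is what links tail behaviour of event magnitudes to changing exceedance probability.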
The comparison study among several data transformations in autoregressive modeling
Setiyowati, Susi; Waluyo, Ramdhani Try
2015-12-01
In finance, the adjusted close prices of stocks are used to observe the performance of a company. Extreme prices, which may increase or decrease drastically, are often a particular concern since they can signal impending bankruptcy. As a preventive action, investors have to forecast future stock prices comprehensively. For that purpose, time series analysis is one of the statistical methods that can be implemented, for both stationary and non-stationary processes. Since the variability of stock prices tends to be large and extreme values are almost always present, it is necessary to apply a data transformation so that time series models, i.e. autoregressive models, can be applied appropriately. One popular data transformation in finance is the return model, in addition to the ratio of logarithms and the Tukey ladder transformations. In this paper these transformations are applied to stationary AR models and non-stationary ARCH and GARCH models through simulations with varying parameters. As a result, this work presents a suggestion table that shows transformation behavior for various conditions of parameters and models. It is confirmed that which transformation is better depends on the type of data distribution. In addition, the parameter conditions also have a significant influence.
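The two transformation families named above, log returns and Tukey's ladder of powers, are easy to state concretely. The price series below is a made-up example, not data from the study.

```python
import numpy as np

def log_returns(prices):
    """r_t = log(p_t / p_{t-1}), the 'return model' transformation."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def tukey_ladder(x, lam):
    """Tukey's ladder of powers: x**lam for lam != 0, log(x) for lam == 0."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else x ** lam

prices = [100.0, 105.0, 103.0, 110.0]   # made-up adjusted close prices
r = log_returns(prices)
print(r.round(4).tolist())
```

The transformed series, rather than the raw prices, would then be passed to the AR/ARCH/GARCH fitting step.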
International Nuclear Information System (INIS)
Jiang, Y.-H.; Liu, F.; Song, S.-J.
2012-01-01
An extended analytical model is derived for non-isothermal solid-state phase transformation assuming interface-controlled growth mode, e.g. polymorphic or allotropic transformation. In the modeling, incorporation of thermodynamic factor into kinetics of nucleation and growth is performed, so that the model can be used to describe the transformation occurring either near or far from the equilibrium state. Furthermore, the effect of the initial transformation temperature is included through a special treatment for the “temperature integral”, so that the model can be used to depict the transformation during either continuous heating or continuous cooling. Numerical calculations demonstrate that the extended analytical model is accurate enough for practical use. On this basis, a general rate equation for non-isothermal (isochronal heating and cooling) transformation is derived. Applying the present model, the overall kinetic behavior of γ/α transformation in binary substitutional Fe-based alloys (e.g. Fe–Mn and Fe–Cu) upon cooling, measured by dilatometry, is described successfully. Compared with previous work, where a site saturation assumption is generally made, the prevalence of continuous nucleation deduced using the present model prediction seems more reasonable.
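A minimal numerical sketch of the kind of non-isothermal kinetics described above: a JMAK-type transformed fraction driven by a numerically evaluated "temperature integral" of the Arrhenius factor along a constant heating ramp. The activation energy, pre-factor, and Avrami exponent below are illustrative assumptions, not values from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def transformed_fraction(T, heating_rate, Q=150e3, k0=1e10, n=3.0):
    """JMAK-type sketch of non-isothermal kinetics: transformed fraction
    f = 1 - exp(-(k0 * I)**n), with I a numerical 'temperature integral'
    of exp(-Q/RT) accumulated along a constant heating ramp."""
    T = np.asarray(T, dtype=float)
    dt = np.diff(T) / heating_rate               # seconds spent per dT step
    I = np.cumsum(np.exp(-Q / (R * T[1:])) * dt)
    return 1.0 - np.exp(-(k0 * I) ** n)

T = np.linspace(600.0, 900.0, 2000)              # heat from 600 K to 900 K
f = transformed_fraction(T, heating_rate=0.5)    # 0.5 K/s ramp
print(float(f[0]) < 1e-3, float(f[-1]) > 0.99)
```

The same accumulation runs in reverse temperature order for continuous cooling, which is the isochronal-cooling case the abstract treats via its special handling of the temperature integral.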
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focuses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impact, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Fuchs, Sven; Thaler, Thomas; Bonnefond, Mathieu; Clarke, Darren; Driessen, Peter; Hegger, Dries; Gatien-Tournat, Amandine; Gralepois, Mathilde; Fournier, Marie; Mees, Heleen; Murphy, Conor; Servain-Courant, Sylvie
2015-04-01
Facing the challenges of climate change, this project aims to analyse and evaluate the multiple use of flood alleviation schemes with respect to social transformation in communities exposed to flood hazards in Europe. The overall goals are: (1) the identification of indicators and parameters necessary for strategies to increase societal resilience, (2) an analysis of the institutional settings needed for societal transformation, and (3) perspectives on the changing divisions of responsibilities between public and private actors necessary to arrive at more resilient societies. This proposal assesses societal transformations from the perspective of changing divisions of responsibilities between public and private actors necessary to arrive at more resilient societies. Yet each risk mitigation measure is built on a narrative of exchanges and relations between people and therefore may condition the outputs. As such, governance is done by people interacting; risk mitigation measures and climate change adaptation are therefore simultaneously both outcomes of, and productive of, public and private responsibilities. Building on current knowledge, this project will focus on different dimensions of adaptation and mitigation strategies based on social, economic, and institutional incentives and settings, centring on the linkages between these different dimensions and complementing existing flood risk governance arrangements. The policy dimension of adaptation, predominantly decisions on the societally admissible level of vulnerability and risk, will be evaluated by a human-environment interaction approach using multiple methods and the assessment of social capacities of stakeholders across scales. As such, the challenges of adaptation to flood risk will be tackled by converting scientific frameworks into practical assessment and policy advice. In addressing the relationship between these dimensions of adaptation on different temporal and spatial scales, this
How to transform local energy systems towards bioenergy? Three strategy models for transformation
International Nuclear Information System (INIS)
Martensson, Kjell; Westerberg, Karin
2007-01-01
During the last decades, the actors within the energy sector in Sweden, as well as in many other countries, have faced increasing demands to transform the energy system towards ecological sustainability. In Sweden these demands have led to numerous policies and economic incentives promoting the use of renewables (which in the Swedish discourse often also carries a connotation of 'indigenous energy sources'), and especially the promotion of bioenergy. To be successful, however, these policies and economic incentives need to be interpreted, adapted to different local contexts, and translated into actual transformation processes. In Sweden the municipal authorities have played an important role as interpreters of such institutional frameworks and implementers of local transformation processes. In this article, we reconstruct three transformation processes implemented by local municipal authorities, chiselling out the different strategy models developed through them. We argue that such reconstructions help to make visible the different and complex interactions between national institutional frameworks and local contexts, as well as interactions within such local contexts. We hope that the strategy models presented can contribute to the understanding of the different kinds of local actions that can foster a further implementation of bioenergy in the energy system.
Comparison of BrainTool to other UML modeling and model transformation tools
Nikiforova, Oksana; Gusarovs, Konstantins
2017-07-01
Over the last 30 years, numerous model-generated software systems have been offered to address problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar arguments regarding Unified Modeling Language (UML) models at different levels of abstraction. It is claimed that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features: starting with a model editor and a model repository for the traditional ones, and ending with a code generator (possibly using a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor for defining new transformations in the most advanced ones. The present paper contains the results of a comparison of CASE tools (mainly UML editors) with respect to the level of automation they offer.
Modelling of stresses generated in steels by phase transformations
International Nuclear Information System (INIS)
Dudek, K.; Glowacki, M.; Pietrzyk, M.
1999-01-01
A numerical model describing the stresses arising during phase transformations in steel products is presented. The full model consists of three components. The first component uses a finite element solution of the Fourier equation to evaluate the temperature field inside the sample. The second component predicts the kinetics of phase transformations occurring during cooling of steel products. Coupling of these two components allows prediction of the structure and properties of final products at room temperature. The third component uses an elastic-plastic finite element model to predict stresses caused by non-uniform temperatures and by changes of volume during transformations. Typical results of simulations performed for cooling of rails after hot rolling are presented. (author)
Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models
ter Hofstede, F.; Wedel, M.
In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were
Models for thermal and mechanical monitoring of power transformers
Energy Technology Data Exchange (ETDEWEB)
Vilaithong, Rummiya
2011-07-01
At present, for economic reasons, there is an increasing emphasis on keeping transformers in service for longer than in the past. A condition-based maintenance using an online monitoring and diagnostic system is one option to ensure reliability of the transformer operation. The key parameters for effectively monitoring equipment can be selected by failure statistics and estimated failure consequences. In this work, two key aspects of transformer condition monitoring are addressed in depth: thermal behaviour and behaviour of on-load tap changers. In the first part of the work, transformer thermal behaviour is studied, focussing on top-oil temperatures. Through online comparison of a measured value of the top-oil temperature and its calculated value, some rapidly developing failures in power transformers such as malfunction of the cooling unit may be detected. Predictions of top-oil temperature can be obtained by means of a mathematical model. Long-term investigations on some dynamic top-oil temperature models are presented for three different types of transformer units. The last-state top-oil temperature, load current, ambient temperature and the operating state of pumps and fans are applied as inputs of the top-oil temperature models. In the fundamental physical models presented, some constant parameters are required and can be estimated using a least-squares optimization technique. Multilayer Feed-forward and Recurrent neural network models are also proposed and investigated. The neural network models are trained with three different Backpropagation training algorithms: Levenberg-Marquardt, Scaled Conjugate Gradient and Automated Bayesian Regularization. The effect of varying operating conditions of the cooling units and the non-steady-state behaviour of loading conditions, as well as ambient temperature are noted. Results show sophisticated temperature prediction is possible using the neural network models that is generally more accurate than with the physical
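A physical top-oil model of the kind referred to above can be sketched as a first-order lag toward a load-dependent steady-state rise; the form resembles the IEEE C57.91 exponent model, but the rated rise, oil time constant, loss ratio, and exponent below are assumed values for illustration only.

```python
import numpy as np

def top_oil_temperature(load_pu, ambient, dt=60.0, tau=3 * 3600.0,
                        rise_rated=55.0, loss_ratio=5.0, n=0.8):
    """First-order top-oil model: the oil temperature relaxes toward a
    steady-state value set by ambient temperature and per-unit load."""
    theta = ambient[0]                            # cold start at ambient
    out = []
    for k, amb in zip(load_pu, ambient):
        # Steady-state rise, IEEE C57.91-style exponent form.
        theta_ss = amb + rise_rated * ((k**2 * loss_ratio + 1)
                                       / (loss_ratio + 1)) ** n
        theta += (theta_ss - theta) * dt / tau    # one Euler step of the lag
        out.append(theta)
    return np.array(out)

steps = 24 * 60                                   # one day, one-minute samples
temps = top_oil_temperature(np.ones(steps), np.full(steps, 20.0))
print(round(float(temps[-1]), 1))
```

In a monitoring setting, a persistent gap between such a predicted trajectory and the measured top-oil temperature would flag a developing fault such as a cooling-unit malfunction.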
Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.
2017-01-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood
FOURIER SERIES MODELS THROUGH TRANSFORMATION
African Journals Online (AJOL)
monthly temperature data (1996 – 2005) collected from the National Root ... KEY WORDS: Fourier series, square transformation, multiplicative model, ... fluctuations or movements are often periodic (Ekpeyong, 2005). .... significant trend or not; if the trend is not significant, the grand mean may be used as an estimate of trend.
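A Fourier-series seasonal model of the kind this record describes can be fitted by ordinary least squares on a design matrix of harmonic terms. The monthly "temperature" series below is synthetic, and the square transformation mentioned in the keywords is omitted for brevity.

```python
import numpy as np

def fourier_design(t, period=12, harmonics=2):
    """Design matrix [1, cos wt, sin wt, cos 2wt, sin 2wt, ...]."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

t = np.arange(24)                                 # two years of monthly data
y = 25.0 + 3.0 * np.sin(2.0 * np.pi * t / 12.0)  # synthetic temperatures
beta, *_ = np.linalg.lstsq(fourier_design(t), y, rcond=None)
print(beta.round(2)[:3].tolist())
```

The recovered coefficients separate the grand mean (intercept) from the periodic component, which is exactly the trend-versus-seasonality split the abstract discusses.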
International Nuclear Information System (INIS)
Kraus, N.N.; Slovic, P.
1988-01-01
Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions
Directory of Open Access Journals (Sweden)
Mihaela Poienar
2014-09-01
The clock hour figure mathematical model of a three-phase transformer can be expressed, in its plainest form, through a 3×3 square matrix, called the code matrix. The row positions reflect modifications at the high-voltage winding terminals and the column positions reflect modifications at the low-voltage winding terminals. The main changes to the transformer winding terminals are: circular permutation of the connection between windings; terminal supply reversal; reversed direction of the phase winding wrapping; swapping the beginning and end of a phase winding; and conversion of the connection from N to Z between phase windings, or the inverse. The analytical form of these changes affects the configuration of the mathematical model, expressed through a transformation diagram that is proposed and analyzed in two versions: bipolar and unipolar (fanwise). The paper ends with remarks on the practical exploitation of the transformation diagram.
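The circular-permutation operation on such a code matrix can be illustrated with row and column rolls of a 3×3 array; the matrix contents here are placeholders, not the actual clock-hour codes from the paper.

```python
import numpy as np

# Placeholder 3x3 "code matrix": rows ~ high-voltage terminals,
# columns ~ low-voltage terminals (contents invented for illustration).
code = np.arange(9).reshape(3, 3)

def permute_hv(m):
    """Circular permutation of the HV connections: roll the rows."""
    return np.roll(m, 1, axis=0)

def permute_lv(m):
    """Circular permutation of the LV connections: roll the columns."""
    return np.roll(m, 1, axis=1)

# Three successive circular permutations restore the original connection.
print(bool(np.array_equal(permute_hv(permute_hv(permute_hv(code))), code)))
```

Each of the other terminal changes listed in the abstract (supply reversal, winding reversal, N/Z conversion) would likewise act as a fixed matrix operation, which is what makes a transformation diagram over these operations possible.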
An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.
Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon
2017-11-27
In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriated for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.
Modeling of magnetization reversal processes in magnetic circuits of measuring transformers
Lebedev, Vladimir; Makarov, Arkadiy; Yablokov, Andrey; Filatova, Galina
2015-01-01
The article describes methods for modeling transient regimes in current and voltage transformers. In most studies, measuring transformers are modeled in a stationary mode to determine their metrological characteristics. However, for the safe, uninterrupted operation of transformers and electrical networks it is necessary to study them in dynamic mode. In particular, the study of the transformers' stability to the ferroresonant phenomena occurring during switching o...
Modeling nonhomogeneous Markov processes via time transformation.
Hubbard, R A; Inoue, L Y T; Fann, J R
2008-09-01
Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
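The operational-time idea above can be sketched for a two-state process: on the operational scale the process is homogeneous, so transition probabilities over a calendar interval [t1, t2] depend only on the increment of the time transformation Λ(t). A minimal illustration, assuming a power-law transformation Λ(t) = t^γ and illustrative intensities a, b (none of these values come from the paper):

```python
import math

def transition_matrix_2state(a, b, s):
    """Transition matrix of a homogeneous 2-state Markov process with
    generator Q = [[-a, a], [b, -b]] after operational time s (closed form)."""
    total = a + b
    decay = math.exp(-total * s)
    p11 = (b + a * decay) / total
    p22 = (a + b * decay) / total
    return [[p11, 1 - p11], [1 - p22, p22]]

def operational_time(t, gamma):
    """Example time transformation Lambda(t) = t**gamma (illustrative choice)."""
    return t ** gamma

def nonhomogeneous_transition(a, b, gamma, t1, t2):
    # On the operational scale the process is homogeneous, so the
    # transition matrix depends only on Lambda(t2) - Lambda(t1).
    s = operational_time(t2, gamma) - operational_time(t1, gamma)
    return transition_matrix_2state(a, b, s)

P = nonhomogeneous_transition(a=0.5, b=0.2, gamma=1.5, t1=1.0, t2=2.0)
print(P)  # each row sums to 1
```

With γ = 1 the transformation is the identity and the homogeneous model is recovered, which is a convenient sanity check on any implementation.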
The Unfolding of Value Sources During Online Business Model Transformation
Directory of Open Access Journals (Sweden)
Nadja Hoßbach
2016-12-01
Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.
[Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].
Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang
2014-02-01
In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper segmentation of particulate matter with sizes below 2.5 μm was performed based on formation mechanisms and hazard level to human beings and the environment. Meanwhile, taking into account the mass concentration, number concentration, enrichment factor of Hg, and content of Hg in different coal ashes, a comprehensive model for evaluating the hazard of PM2.5 emitted by coal-fired boilers was established. Finally, using field experimental data from the previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.
Building a risk-targeted regional seismic hazard model for South-East Asia
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochinese countries. The source model builds upon refined modelling approaches to characterize (1) seismic activity from geologic and geodetic data on crustal faults, (2) seismicity along the interface of subduction zones and within the slabs, and (3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.
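A basic building block of any such rate model is the Gutenberg-Richter recurrence relation combined with Poisson occurrence in time. A small sketch with illustrative a- and b-values (not taken from the study):

```python
import math

def gr_annual_rate(m, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m from a
    Gutenberg-Richter law log10(N) = a - b*m (a, b are illustrative)."""
    return 10 ** (a - b * m)

def poisson_exceedance_prob(rate, years):
    """Probability of at least one such event in an exposure window,
    assuming Poisson occurrence in time."""
    return 1 - math.exp(-rate * years)

rate = gr_annual_rate(6.5)                 # events/year with M >= 6.5
print(poisson_exceedance_prob(rate, 50))   # chance of one or more in 50 years
```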
Bringing Partnership Home: A Model of Family Transformation
Directory of Open Access Journals (Sweden)
Julie de Azevedo Hanks
2015-07-01
Eisler’s cultural transformation theory suggests that the global crises we face can be addressed only through movement to a partnership model of social organization. Drawing on cultural transformation theory and systems theory, a partnership model of family organization (PMFO) is outlined as a practical framework to guide families toward partnership relations. Eight components of PMFO are presented and expanded on as a path toward furthering familial and societal transformation. The eight tenets of a PMFO are: (1) cooperative adult leadership, (2) connecting orientation, (3) caretaking emphasis, (4) collaborative roles and rules, (5) celebration of unique contributions, (6) compassionate communication, (7) conscious language use, and (8) collection and creation of partnership stories. Finally, specific strategies for applying the PMFO are discussed.
A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem
Directory of Open Access Journals (Sweden)
Omid Boyer
2013-01-01
Technological progress has caused industrial hazardous waste to increase worldwide. Management of hazardous waste is a significant issue due to the risk it imposes on the environment and human life. This risk can result from the location of undesirable facilities and also from routing hazardous waste. In this paper a bi-objective mixed integer programming model for the industrial hazardous waste location-routing problem is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk, measured as the risk of population exposure within a bandwidth along the route. This model can help decision makers locate treatment, recycling, and disposal centers simultaneously and route waste between these facilities considering risk and cost criteria. The results of the solved problem show a conflict between the two objectives: it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The model is applied in Markazi province in Iran.
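The weighted sum method mentioned above can be illustrated on toy (cost, risk) pairs standing in for MILP solutions; the candidate values and the weight grid below are invented for illustration only:

```python
def weighted_sum_front(solutions, steps=11):
    """Sweep the weight on (cost, risk) pairs and record which candidate
    minimizes w*cost + (1-w)*risk for each weight. The candidates stand
    in for location-routing plans returned by the MILP solver."""
    best = {}
    for i in range(steps):
        w = i / (steps - 1)
        winner = min(solutions, key=lambda s: w * s[0] + (1 - w) * s[1])
        best[round(w, 2)] = winner
    return best

# (cost, risk) pairs for hypothetical siting/routing plans
candidates = [(100, 9.0), (120, 6.5), (150, 5.0), (200, 4.8)]
front = weighted_sum_front(candidates)
print(front[0.0], front[1.0])  # pure risk optimum vs pure cost optimum
```

Sweeping the weight traces out the trade-off the abstract describes: cost can be lowered at the price of a marginal increase in risk, and vice versa.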
Mathematical model of three winding auto transformer
International Nuclear Information System (INIS)
Volcko, V.; Eleschova, Z.; Belan, A.; Janiga, P.
2012-01-01
This article deals with the design of a mathematical model of a three-winding autotransformer for steady-state analyses. The article is focused on model simplicity for the purposes of use in complex transmission systems and on the authenticity of the model, taking into account different types of step-voltage regulator. (Authors)
Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages
DEFF Research Database (Denmark)
Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe
2006-01-01
Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim...... to abstract from concrete transformation strategies by distinguishing two major paradigms for representing control flow in process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contribution of this paper are generic strategies...... for transforming from block-oriented process languages to graph-oriented languages, and vice versa....
International Nuclear Information System (INIS)
McKone, T.E.
1994-01-01
Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
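Specifying inputs by an arithmetic mean and coefficient of variation maps naturally onto a lognormal sampling distribution, a common choice for multiplicative quantities in exposure models. The sketch below shows the conversion and a Monte-Carlo check; the mean and CV are arbitrary example values, and the lognormal choice is an assumption for illustration, not necessarily what CAirTOX uses:

```python
import math
import random
import statistics

def lognormal_params(mean, cv):
    """Convert an arithmetic mean and coefficient of variation into the
    (mu, sigma) of the underlying normal distribution for sampling."""
    sigma2 = math.log(1 + cv ** 2)
    mu = math.log(mean) - sigma2 / 2
    return mu, math.sqrt(sigma2)

random.seed(1)
mu, sigma = lognormal_params(mean=2.0, cv=0.5)
samples = [random.lognormvariate(mu, sigma) for _ in range(200_000)]
print(statistics.mean(samples))  # recovers approximately 2.0
```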
The right tool for the right job : assessing model transformation quality
Amstel, van M.F.
2010-01-01
Model-Driven Engineering (MDE) is a software engineering discipline in which models play a central role. One of the key concepts of MDE is model transformations. Because of the crucial role of model transformations in MDE, they have to be treated in a similar way as traditional software artifacts.
Rule-based modularization in model transformation languages illustrated with ATL
Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric
2007-01-01
This paper studies ways of modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of relations between source and target metamodels and on the basis of generic transformation
A COMPREHENSIVE MODEL FOR THE POWER TRANSFORMER DIGITAL DIFFERENTIAL PROTECTION FUNCTIONING RESEARCH
Directory of Open Access Journals (Sweden)
Yu. V. Rumiantsev
2016-01-01
This article presents a comprehensive model for researching the functioning of two-winding power transformer digital differential protection. The model is developed in the MatLab-Simulink dynamic simulation environment using the SimPowerSystems component library and includes the following elements: power supply, three-phase power transformer, wye-connected current transformers, and the two-winding power transformer digital differential protection model itself. Each element of the model is described in sufficient detail for its implementation in the dynamic simulation environment. Particular attention is paid to the digital signal processing principles and to the formation of the differential and restraining currents in the model's main element, the power transformer digital differential protection. With the help of this model, the protection's functioning was researched during internal and external faults: internal short-circuit, and external short-circuit with and without current transformer saturation on the power transformer's low-voltage side. Each experiment is illustrated with waveforms of the differential and restraining currents of the digital differential protection under study. Particular attention was paid to analysis of the digital protection's functioning during power transformer abnormal modes: overexcitation and inrush current conditions. Typical current waveforms during these modes are shown and their harmonic content investigated, and the causes of these modes are analyzed in detail. Digital differential protection blocking algorithms based on harmonic content are considered; drawbacks of these algorithms are observed and the need for their further technical improvement is noted.
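A typical harmonic-content blocking criterion compares the second harmonic of the differential current against the fundamental, since inrush current is rich in second harmonic while fault current is not. A minimal sketch using a one-cycle DFT; the 15% threshold and the synthetic waveforms are assumptions for illustration:

```python
import math

def harmonic_magnitude(samples, k):
    """Magnitude of harmonic k from one cycle of N samples
    (rectangular-window DFT, a common building block of relay filters)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

def inrush_block(samples, threshold=0.15):
    """Block tripping when the 2nd-harmonic share of the differential
    current exceeds the threshold (15% is a commonly quoted setting,
    used here as an assumption)."""
    ratio = harmonic_magnitude(samples, 2) / harmonic_magnitude(samples, 1)
    return ratio > threshold

n = 32
fault = [math.sin(2 * math.pi * i / n) for i in range(n)]
inrush = [math.sin(2 * math.pi * i / n) + 0.4 * math.sin(4 * math.pi * i / n)
          for i in range(n)]
print(inrush_block(fault), inrush_block(inrush))  # False True
```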
Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.
2017-12-01
Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data has allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
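The gauge-to-gauge dependence driving such event footprints can be illustrated with a Gaussian copula: one simple way (not the authors' conditional multivariate-extremes method) to produce dependent exceedances at two gauges. The correlation and quantile level below are illustrative:

```python
import math
import random

def correlated_uniforms(rho, n, seed=0):
    """Pairs of uniforms with Gaussian-copula dependence rho, standing in
    for flow quantiles at two gauges (rho is an illustrative value)."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        pairs.append((phi(z1), phi(z2)))
    return pairs

pairs = correlated_uniforms(rho=0.8, n=100_000)
# fraction of events where both gauges exceed their 1-in-10 level
joint = sum(u > 0.9 and v > 0.9 for u, v in pairs) / len(pairs)
print(joint)  # far above the 0.01 expected under independence
```

The point of the exercise: with spatial dependence, simultaneous exceedances at multiple gauges are far more common than the independence assumption behind `constant in space' hazard layers would suggest.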
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
Computer simulation of the martensite transformation in a model two-dimensional body
International Nuclear Information System (INIS)
Chen, S.; Khachaturyan, A.G.; Morris, J.W. Jr.
1979-05-01
An analytical model of a martensitic transformation in an idealized body is constructed and used to carry out a computer simulation of the transformation in a pseudo-two-dimensional crystal. The reaction is assumed to proceed through the sequential transformation of elementary volumes (elementary martensitic particles, EMP) via the Bain strain. The elastic interaction between these volumes is computed and the transformation path chosen so as to minimize the total free energy. The model transformation shows interesting qualitative correspondences with the known features of martensitic transformations in typical solids
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including
TP-model transformation-based-control design frameworks
Baranyi, Péter
2016-01-01
This book covers new aspects and frameworks of control, design, and optimization based on the TP model transformation and its various extensions. The author outlines the three main steps of polytopic and LMI based control design: 1) development of the qLPV state-space model, 2) generation of the polytopic model; and 3) application of LMI to derive controller and observer. He goes on to describe why literature has extensively studied LMI design, but has not focused much on the second step, in part because the generation and manipulation of the polytopic form was not tractable in many cases. The author then shows how the TP model transformation facilitates this second step and hence reveals new directions, leading to powerful design procedures and the formulation of new questions. The chapters of this book, and the complex dynamical control tasks which they cover, are organized so as to present and analyze the beneficial aspect of the family of approaches (control, design, and optimization). Additionally, the b...
Identification and delineation of areas flood hazard using high accuracy of DEM data
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that often occur in Karawang regency need to be mitigated, and technologies are expected that can predict, anticipate, and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities; high-accuracy DEM data used in modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data is used to detect wetlands in the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover analysis uses the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about areas potentially prone to flooding. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
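The TWI computation itself is compact: TWI = ln(a / tan β), where a is the specific catchment area (upslope area per unit contour width) and β the local slope. A sketch with illustrative cell values (not taken from the study's DEM):

```python
import math

def twi(upslope_area, slope_deg, cell_size=30.0):
    """Topographic Wetness Index TWI = ln(a / tan(beta)); upslope_area is
    in m^2, cell_size in m. The 30 m cell and the sample cells below are
    illustrative, not values from the study."""
    a = upslope_area / cell_size              # specific catchment area
    beta = math.radians(max(slope_deg, 0.1))  # floor the slope to avoid tan(0)
    return math.log(a / math.tan(beta))

flat_valley = twi(upslope_area=90_000, slope_deg=0.5)
steep_ridge = twi(upslope_area=1_800, slope_deg=25.0)
print(flat_valley, steep_ridge)  # wetter (flood-prone) cells score higher
```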
International Nuclear Information System (INIS)
Alzbutas, R.; Ostapchuk, S.; Borysiewicz, M.; Decker, K.; Kumar, Manorma; Haeggstroem, A.; Nitoi, M.; Groudev, P.; Parey, S.; Potempski, S.; Raimond, E.; Siklossy, T.
2016-01-01
The goal of this report is to provide guidance on practices to model extreme weather hazards and implement them in extended level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the End Users Workshop. This guidance focuses on extreme weather hazards, namely extreme wind, extreme temperature and snow pack. Other hazards, however, are considered in cases where they are correlated/associated with the hazard under discussion. The guidance developed refers to existing guidance whenever possible. As recommended by end users, this guidance covers questions of developing integrated and/or separate extreme weather PSA models. (authors)
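Extreme wind or temperature hazards for PSA are often summarized as return levels from an extreme-value fit to annual maxima. A sketch using the Gumbel (EV1) quantile formula, with invented location and scale parameters:

```python
import math

def gumbel_return_level(mu, sigma, T):
    """Return level with return period T years for a Gumbel (EV1) fit to
    annual maxima; mu (location) and sigma (scale) are illustrative."""
    return mu - sigma * math.log(-math.log(1 - 1 / T))

# e.g. annual-maximum gust speeds fitted with mu = 28 m/s, sigma = 4 m/s
for T in (10, 100, 10_000):
    print(T, round(gumbel_return_level(28.0, 4.0, T), 1))
```

Extended PSA typically pushes toward very long return periods (10^4 years and beyond), where the fitted tail dominates the hazard frequency used in the event trees.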
Geostatistical analyses and hazard assessment on soil lead in Silvermines area, Ireland
International Nuclear Information System (INIS)
McGrath, David; Zhang Chaosheng; Carton, Owen T.
2004-01-01
Spatial distribution and hazard assessment of soil lead in the mining site of Silvermines, Ireland, were investigated using statistics, geostatistics and geographic information system (GIS) techniques. Positively skewed distribution and possible outlying values of Pb and other heavy metals were observed. Box-Cox transformation was applied in order to achieve normality in the data set and to reduce the effect of outliers. Geostatistical analyses were carried out, including calculation of experimental variograms and model fitting. The ordinary point kriging estimates of Pb concentration were mapped. Kriging standard deviations were regarded as the standard deviations of the interpolated pixel values, and a second map was produced, that quantified the probability of Pb concentration higher than a threshold value of 1000 mg/kg. These maps provide valuable information for hazard assessment and for decision support. - A probability map was produced that was useful for hazard assessment and decision support
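The probability map described above follows from treating each pixel's kriging estimate and kriging standard deviation as the mean and standard deviation of a normal predictive distribution. A sketch for the 1000 mg/kg threshold (the pixel values are invented):

```python
import math

def exceedance_probability(estimate, kriging_sd, threshold=1000.0):
    """P(Pb > threshold) at a pixel, treating the kriging estimate and its
    standard deviation as a normal predictive mean and sd (mg/kg; the
    example pixels below are illustrative)."""
    z = (threshold - estimate) / kriging_sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

print(exceedance_probability(800.0, 250.0))   # estimate below threshold, some risk
print(exceedance_probability(1400.0, 250.0))  # likely exceeds the threshold
```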
Diffuse-interface model for rapid phase transformations in nonequilibrium systems.
Galenko, Peter; Jou, David
2005-04-01
A thermodynamic approach to rapid phase transformations within a diffuse interface in a binary system is developed. Assuming an extended set of independent thermodynamic variables formed by the union of the classic set of slow variables and the space of fast variables, we introduce finite speeds of heat and solute diffusive propagation as the interface advances at finite speed. To describe transformations within the diffuse interface, we use the phase-field model, which allows us to follow steep but smooth changes of phase within the width of the diffuse interface. Governing equations of the phase-field model are derived for the hyperbolic model, a model with memory, and a model of nonlinear evolution of transformation within the diffuse interface. The consistency of the model is proved by verifying the condition of positive entropy production and by outcomes of the fluctuation-dissipation theorem. A comparison with existing sharp-interface and diffuse-interface versions of the model is given.
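For orientation, the classical parabolic baseline of such phase-field models is the Allen-Cahn equation; the sketch below relaxes a sharp front into a smooth diffuse interface. It is a baseline illustration only, not the paper's hyperbolic model with memory, and all parameters are illustrative:

```python
def allen_cahn_step(phi, dx=0.1, dt=0.001, eps2=0.01, mobility=1.0):
    """One explicit Euler step of the parabolic Allen-Cahn equation
    d(phi)/dt = -M * (f'(phi) - eps^2 * laplacian(phi)) with the
    double-well potential f = phi^2 * (1 - phi)^2. Boundary cells are
    held fixed at the two bulk phases."""
    n = len(phi)
    new = phi[:]
    for i in range(1, n - 1):
        lap = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx ** 2
        fprime = 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])
        new[i] = phi[i] - dt * mobility * (fprime - eps2 * lap)
    return new

# A sharp initial front relaxes toward a smooth diffuse interface
phi = [0.0] * 20 + [1.0] * 20
for _ in range(2000):
    phi = allen_cahn_step(phi)
print(min(phi), max(phi))
```

After relaxation, the order parameter varies steeply but smoothly across a few cells, which is precisely the "steep but smooth change of phase within the width of the diffuse interface" the abstract refers to.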
A Poisson-Fault Model for Testing Power Transformers in Service
Directory of Open Access Journals (Sweden)
Dengfu Zhao
2014-01-01
This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset of a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under the model, we find that the complexities of a classical method and of a Monte Carlo simulation are unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators, and the simulation results reveal that the suggested algorithm is the quickest among the three candidates.
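The flavor of a Bayes estimate of a Poisson failure rate can be shown with the conjugate Gamma-Poisson shortcut; the paper's filtered estimator is more elaborate, and the prior parameters and fault counts here are assumptions for illustration:

```python
def posterior_failure_rate(failures, exposure_years, alpha=1.0, beta=10.0):
    """Posterior-mean failure rate of a Poisson process with a conjugate
    Gamma(alpha, beta) prior: (alpha + n) / (beta + t). The prior values
    are illustrative assumptions, not settings from the paper."""
    return (alpha + failures) / (beta + exposure_years)

# 2 recorded faults over 25 service years for a hypothetical unit
print(posterior_failure_rate(2, 25.0))  # posterior mean, events per year
```

With no data the estimate falls back to the prior mean alpha/beta, and as exposure accumulates it converges to the empirical rate n/t, which is the usual Bayes shrinkage behavior.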
Modelling of magnetostriction of transformer magnetic core for vibration analysis
Marks, Janis; Vitolina, Sandra
2017-12-01
Magnetostriction is a phenomenon occurring in the transformer core in normal operation mode. Yet over time it can cause delamination of the magnetic core, resulting in a higher level of vibrations that are measured on the surface of the transformer tank during diagnostic tests. The aim of this paper is to create a model for evaluating elastic deformations in the magnetic core that can be used for power transformers with intensive vibrations in order to eliminate magnetostriction as their cause. A description of the developed model in Matlab and COMSOL software is provided, including restrictions concerning geometry and material properties, and the results of research on magnetic core anisotropy are given. As a case study, magnetostriction is modelled for a 5-legged 200 MVA power transformer with a rated voltage of 13.8/137 kV, based on which a comparative analysis of vibration levels and elastic deformations is performed.
Clone Detection for Graph-Based Model Transformation Languages
DEFF Research Database (Denmark)
Strüber, Daniel; Plöger, Jennifer; Acretoaie, Vlad
2016-01-01
Cloning is a convenient mechanism to enable reuse across and within software artifacts. On the downside, it is also a practice related to significant long-term maintainability impediments, thus generating a need to identify clones in affected artifacts. A large variety of clone detection techniques has been proposed for programming and modeling languages; yet no specific ones have emerged for model transformation languages. In this paper, we explore clone detection for graph-based model transformation languages. We introduce potential use cases for such techniques in the context of constructive and analytical quality assurance. From these use cases, we derive a set of key requirements. We describe our customization of existing model clone detection techniques allowing us to address these requirements. Finally, we provide an experimental evaluation, indicating that our customization of ConQAT, one...
Analytical method of CIM to PIM transformation in Model Driven Architecture (MDA)
Directory of Open Access Journals (Sweden)
Martin Kardos
2010-06-01
Full Text Available Information system models on a higher level of abstraction have become a daily routine in many software companies. The concept of Model Driven Architecture (MDA), published by the standardization body OMG since 2001, has become a concept for the creation of software applications and information systems. MDA specifies four levels of abstraction: the top three levels are created as graphical models and the last one as an implementation code model. Much MDA research focuses on the lower levels and the transformations between them. The top level of abstraction, called the Computation Independent Model (CIM), and its transformation to the lower level, called the Platform Independent Model (PIM), is not such an extensive research topic. Considering the great importance and usability of this level in IS development practice, our research activity is now focused on this highest level of abstraction, CIM, and its possible transformation to the lower PIM level. In this article we present a possible solution for CIM modeling and an analytical method for its transformation to PIM. Keywords: transformation, MDA, CIM, PIM, UML, DFD.
Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.
2015-12-01
Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of the early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
Defaultable Game Options in a Hazard Process Model
Directory of Open Access Journals (Sweden)
Tomasz R. Bielecki
2009-01-01
Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.
The role of technology and engineering models in transforming healthcare.
Pavel, Misha; Jimison, Holly Brugge; Wactlar, Howard D; Hayes, Tamara L; Barkis, Will; Skapik, Julia; Kaye, Jeffrey
2013-01-01
The healthcare system is in crisis due to challenges including escalating costs, the inconsistent provision of care, an aging population, and high burden of chronic disease related to health behaviors. Mitigating this crisis will require a major transformation of healthcare to be proactive, preventive, patient-centered, and evidence-based with a focus on improving quality-of-life. Information technology, networking, and biomedical engineering are likely to be essential in making this transformation possible with the help of advances, such as sensor technology, mobile computing, machine learning, etc. This paper has three themes: 1) motivation for a transformation of healthcare; 2) description of how information technology and engineering can support this transformation with the help of computational models; and 3) a technical overview of several research areas that illustrate the need for mathematical modeling approaches, ranging from sparse sampling to behavioral phenotyping and early detection. A key tenet of this paper concerns complementing prior work on patient-specific modeling and simulation by modeling neuropsychological, behavioral, and social phenomena. The resulting models, in combination with frequent or continuous measurements, are likely to be key components of health interventions to enhance health and wellbeing and the provision of healthcare.
Warped linear mixed models for the genetic analysis of transformed phenotypes.
Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver
2014-09-19
Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss of power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
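The pre-processing step this paper improves on can be illustrated directly: applying a log transform to a right-skewed phenotype and checking how much closer to symmetric (and hence Gaussian-like) the values become. The data and the skewness helper below are illustrative, not from the paper:

```python
# Sketch: Gaussianizing a skewed phenotype with a log transform -- the
# manual pre-processing that the warped LMM replaces with an inferred
# transformation. Synthetic log-normal data stand in for a phenotype.
import numpy as np

def skewness(v):
    """Sample skewness: third central moment over variance^(3/2)."""
    d = v - v.mean()
    return float((d ** 3).mean() / (d ** 2).mean() ** 1.5)

rng = np.random.default_rng(0)
phenotype = rng.lognormal(mean=0.0, sigma=0.8, size=2000)  # right-skewed

print(skewness(phenotype))           # strongly positive (heavy right tail)
print(skewness(np.log(phenotype)))   # near zero: log data are Gaussian
```

The point of the paper is that the right transformation is phenotype-specific; here the log happens to be exactly right because the toy data are log-normal by construction.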
Modeling of austenite to ferrite transformation
Indian Academy of Sciences (India)
Modeling of austenite to ferrite transformation. Mohsen Kazeminezhad, Department of Materials Science and Engineering, Sharif University of Technology, Azadi Avenue, Tehran, Iran. MS received 17 January 2011; revised 9 July 2011. Abstract: In this research, an algorithm ...
Challenges in Materials Transformation Modeling for Polyolefins Industry
Lai, Shih-Yaw; Swogger, Kurt W.
2004-06-01
Unlike most published polymer processing and/or forming research, the transformation of polyolefins into fabricated articles often involves non-confined flow, or so-called free surface flow (e.g. fiber spinning, blown films, and cast films), in which elongational flow takes place during the fabrication process. Obviously, the characterization and validation of extensional rheological parameters and their use to develop rheological constitutive models are the focus of polyolefins materials transformation research. Unfortunately, challenges remain, with limited validation for non-linear, non-isothermal constitutive models for polyolefins. Further complexity arises in the transformation of polyolefins in the elongational flow system as it involves a stress-induced crystallization process. The complicated nature of elongational, non-linear rheology and non-isothermal crystallization kinetics makes the development of numerical methods very challenging for polyolefins materials forming modeling. From the product-based company standpoint, the challenges of materials transformation research go beyond elongational rheology, crystallization kinetics and its numerical modeling. In order to make models useful for the polyolefin industry, it is critical to develop links between molecular parameters and both equipment and materials forming parameters. The recent advances in constrained geometry catalysis and materials science understanding (INSITE technology and molecular design capability) have made industrial polyolefinic materials forming modeling more viable, due to the fact that the molecular structure of the polymer can be well predicted and controlled during the polymerization. In this paper, we will discuss inter-relationships (models) among molecular parameters such as polymer molecular weight (Mw), molecular weight distribution (MWD), long chain branching (LCB), short chain branching (SCB, or comonomer types and distribution) and their effects on shear and
Tail modeling in a stretched magnetosphere 1. Methods and transformations
International Nuclear Information System (INIS)
Stern, D.P.
1987-01-01
A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional "stretch transformation" along the x axis, a generalization of a method introduced by Voigt. The transformation is concisely represented by the "stretch function" f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is next shown that distortion transformations can be viewed as mappings of field lines from one pattern to another: Euler potentials are used in the derivation, but the final result only requires knowledge of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are then derived, and applications to field modeling, field line motion, MHD modeling, and incompressible fluid dynamics are considered. © American Geophysical Union 1987
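The reason the distortion is applied to the vector potential rather than to B directly can be made explicit. The identity below is standard vector calculus; the stretch form shown afterwards is only a schematic illustration of the idea, not Stern's exact transformation rules:

```latex
\[
  \mathbf{B} = \nabla \times \mathbf{A}
  \quad\Longrightarrow\quad
  \nabla \cdot \mathbf{B} = \nabla \cdot (\nabla \times \mathbf{A}) = 0 ,
\]
so any distortion expressed through $\mathbf{A}$ automatically yields a
divergence-free field. Schematically, a one-dimensional stretch along $x$
with stretch function $f(x)$ acts as
\[
  \mathbf{A}'(x, y, z) = \mathbf{A}\bigl(f(x),\, y,\, z\bigr),
  \qquad
  \mathbf{B}' = \nabla \times \mathbf{A}' .
\]
```

This is why the paper frames the distortion as a transformation of A: the divergence-free constraint is satisfied by construction instead of being imposed afterwards.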
Transformation of UML Behavioral Diagrams to Support Software Model Checking
Directory of Open Access Journals (Sweden)
Luciana Brasil Rebelo dos Santos
2014-04-01
Full Text Available Unified Modeling Language (UML) is currently accepted as the standard for modeling (object-oriented) software, and its use is increasing in the aerospace industry. Verification and Validation of complex software developed according to UML is not trivial due to the complexity of the software itself, and the several different UML models/diagrams that can be used to model behavior and structure of the software. This paper presents an approach to transform up to three different UML behavioral diagrams (sequence, behavioral state machines, and activity) into a single Transition System to support Model Checking of software developed in accordance with UML. In our approach, properties are formalized based on use case descriptions. The transformation is done for the NuSMV model checker, but we see the possibility of using other model checkers, such as SPIN. The main contribution of our work is the transformation of a non-formal language (UML) to a formal language (the language of the NuSMV model checker) towards a greater adoption in practice of formal methods in software development.
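The flavor of such a transformation can be sketched by emitting a NuSMV module from a toy state machine. The state names, the dictionary encoding, and the mapping rules below are purely illustrative, not the paper's actual UML-to-NuSMV translation:

```python
# Sketch: emitting a NuSMV transition system from a toy state machine.
# The mapping (one scalar variable, nondeterministic next-state sets)
# is illustrative only -- not the paper's transformation rules.
transitions = {
    "Idle": ["Active"],
    "Active": ["Idle", "Done"],
    "Done": ["Done"],          # absorbing state
}

lines = [
    "MODULE main",
    "VAR state : {Idle, Active, Done};",
    "ASSIGN init(state) := Idle;",
    "next(state) := case",
]
for src, dests in transitions.items():
    lines.append(f"  state = {src} : {{{', '.join(dests)}}};")
lines.append("esac;")

print("\n".join(lines))
```

A model checker can then verify temporal properties (e.g. that `Done` is eventually reachable) against the generated module, which is the point of translating UML diagrams into a formal language.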
Transformation of renormalization groups in 2N-component fermion hierarchical model
International Nuclear Information System (INIS)
Stepanov, R.G.
2006-01-01
The 2N-component fermion model on the hierarchical lattice is studied. Explicit formulae are presented for the renormalization group transformation in the space of coefficients defining the Grassmann-valued density of the free measure. The inverse transformation of the renormalization group is calculated. The determination of fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between renormalization group transformations in the boson and fermion hierarchical models is found: it is shown that one transformation is obtained from the other by the substitution of −N for N.
Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M
2014-01-01
Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.
Development of transformations from business process models to implementations by reuse
Dirgahayu, T.; Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis; Hammoudi, S.
2007-01-01
This paper presents an approach for developing transformations from business process models to implementations that facilitates reuse. A transformation is developed as a composition of three smaller tasks: pattern recognition, pattern realization and activity transformation. The approach allows one
Transformer modeling for low- and mid-frequency electromagnetic transients simulation
Lambert, Mathieu
In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model and the results are showing good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have an infinite permeability to calculate short-circuit inductances, and it is a reasonable assumption since the core's magnetomotive force is negligible during a short-circuit, the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate more uncertainty in the fault slip-rate parameter that controls earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. The models considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Hazard function theory for nonstationary natural hazards
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
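For the generalized Pareto magnitudes mentioned above, the hazard function h(x) = f(x) / (1 − F(x)) has a well-known closed form, h(x) = 1/(σ + ξx). This is a standard result stated independently of the paper's derivation; the parameter values below are arbitrary:

```python
# Sketch: hazard function of the generalized Pareto model (GPD) used for
# peaks-over-threshold magnitudes, h(x) = f(x) / (1 - F(x)).
# For GPD(shape xi, scale sigma) this collapses to h(x) = 1/(sigma + xi*x).
import numpy as np

xi, sigma = 0.2, 1.5                 # arbitrary shape and scale
x = np.linspace(0.0, 10.0, 101)

sf = (1.0 + xi * x / sigma) ** (-1.0 / xi)                  # survival 1 - F
pdf = (1.0 / sigma) * (1.0 + xi * x / sigma) ** (-1.0 / xi - 1.0)

h_numeric = pdf / sf
h_closed = 1.0 / (sigma + xi * x)

print(np.allclose(h_numeric, h_closed))  # True: the two forms agree
```

Note that for ξ > 0 the hazard decreases in x: larger exceedances are, conditionally, ever less likely to be exceeded again soon, which is what drives the return-period results discussed in the abstract.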
Different methods for modeling absorption heat transformer powered by solar pond
International Nuclear Information System (INIS)
Sencan, Arzu; Kizilkan, Onder; Bezir, Nalan C.; Kalogirou, Soteris A.
2007-01-01
Solar ponds are a type of solar collector used for storing solar energy at temperatures below 90 °C. Absorption heat transformers (AHTs) are devices used to increase the temperature of a moderately warm fluid to a more useful temperature level. In this study, a theoretical modelling of an absorption heat transformer for the temperature range obtained from an experimental solar pond with dimensions 3.5 x 3.5 x 2 m is presented. The working fluid pair in the absorption heat transformer is an aqueous ternary hydroxide fluid consisting of sodium, potassium and caesium hydroxides in the proportions 40:36:24 (NaOH:KOH:CsOH). Different methods such as linear regression (LR), pace regression (PR), sequential minimal optimization (SMO), M5 model tree, M5' rules, decision table and back propagation neural network (BPNN) are used for modelling the absorption heat transformer. The best results were obtained by the back propagation neural network model. A new formulation based on the BPNN is presented to determine the flow ratio (FR) and the coefficient of performance (COP) of the absorption heat transformer. The BPNN procedure is more accurate and requires significantly less computation time than the other methods.
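A minimal back-propagation network of the kind the paper found most accurate can be sketched from scratch. The data here are a synthetic toy relation standing in for the solar-pond measurements (which are not reproduced in the abstract), and the architecture and learning rate are illustrative choices:

```python
# Sketch: a one-hidden-layer back-propagation network fit by plain
# gradient descent, predicting a COP-like target from two scaled inputs.
# Data and the "COP" relation are synthetic stand-ins, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))        # scaled operating temps
y = (0.45 + 0.05 * X[:, 0] - 0.03 * X[:, 1]).reshape(-1, 1)  # toy COP

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):                            # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                     # forward pass
    err = (h @ W2 + b2) - y                      # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)  # backward pass
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
mse = float(np.mean((pred - y) ** 2))
print(mse)  # small after training
```

In practice one would also hold out test data and compare against the simpler regression methods listed in the abstract (LR, PR, SMO, M5) before preferring the network.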
Infinite conformal symmetries and Riemann-Hilbert transformation in super principal chiral model
International Nuclear Information System (INIS)
Hao Sanru; Li Wei
1989-01-01
This paper presents a new symmetry transformation, the C transformation, in the super principal chiral model and discovers an infinite-dimensional Lie algebra related to the Virasoro algebra without central extension. Using the Riemann-Hilbert transformation, the physical origin of the C transformation is discussed.
Sato Processes in Default Modeling
DEFF Research Database (Denmark)
Kokholm, Thomas; Nicolato, Elisa
In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards...
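The survival probability as a Laplace transform of the cumulative hazard can be illustrated by Monte Carlo. The stochastic intensity below is a toy diffusion-driven process chosen for simplicity (not a Sato process, and not the paper's model):

```python
# Sketch: P(tau > t) = E[exp(-Lambda(t))], with Lambda the integrated
# default intensity. Toy intensity lambda_s = 0.05 * (1 + |W_s|) for a
# Brownian motion W -- illustrative only, not a Sato-process intensity.
import numpy as np

rng = np.random.default_rng(3)
t, n_paths, n_steps = 1.0, 20000, 100
dt = t / n_steps

# Simulate Brownian paths and the resulting intensity paths.
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
lam = 0.05 * (1.0 + np.abs(W))

Lambda = lam.sum(axis=1) * dt            # cumulative hazard on [0, t]
survival = float(np.exp(-Lambda).mean()) # Monte Carlo survival probability
print(survival)
```

The tendency the abstract alludes to is to model Λ (or its Laplace transform) directly rather than building it up from an intensity, which avoids simulating paths as done here.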
Numerical modeling of transformer inrush currents
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Industrial Engineering, University of Perugia, I-06125 Perugia (Italy); Center for Electric and Magnetic Applied Research (Italy); Faba, A., E-mail: faba@unipg.it [Department of Industrial Engineering, University of Perugia, I-06125 Perugia (Italy); Center for Electric and Magnetic Applied Research (Italy)
2014-02-15
This paper presents an application of a vector hysteresis model to the prediction of the inrush current due the arbitrary initial excitation of a transformer after a fault. The approach proposed seems promising in order to predict the transient overshoot in current and the optimal time to close the circuit after the fault.
Analytical calculation of detailed model parameters of cast resin dry-type transformers
International Nuclear Information System (INIS)
Eslamian, M.; Vahidi, B.; Hosseinian, S.H.
2011-01-01
Highlights: • High-frequency behavior of cast resin dry-type transformers was simulated. • Parameters of the detailed model were calculated analytically and compared with FEM results. • A lab transformer was constructed in order to compare theoretical and experimental results. Abstract: The non-flammable characteristics of cast resin dry-type transformers make them suitable for many kinds of applications. This paper presents an analytical method for obtaining the parameters of the detailed model of these transformers. The calculated parameters are compared with and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.
Modeling of the Austenite-Martensite Transformation in Stainless and TRIP Steels
Geijselaers, Hubertus J.M.; Hilkhuijsen, P.; Bor, Teunis Cornelis; Perdahcioglu, Emin Semih; van den Boogaard, Antonius H.; Zhang, S.-H.; Liu, X.-H.; Gheng, M.; Li, J.
2013-01-01
The transformation of austenite to martensite is a dominant factor in the description of the constitutive behavior during forming of TRIP assisted steels. To predict this transformation different models are currently available. In this paper the transformation is regarded as a stress induced process
Comprehensive Power Losses Model for Electronic Power Transformer
DEFF Research Database (Denmark)
Yue, Quanyou; Li, Canbing; Cao, Yijia
2018-01-01
The electronic power transformer (EPT) has higher power losses than the conventional transformer. However, the EPT can correct the power factor, compensate the unbalanced current and reduce the line power losses in the distribution network. Therefore, the higher losses of the EPT and the consequent reduced power losses in the distribution network require a comprehensive consideration when comparing the power losses of the EPT and the conventional transformer. In this paper, a comprehensive power losses analysis model for the EPT in distribution networks is proposed. By analyzing the EPT self-losses and considering the impact of the non-unity power factor and the three-phase unbalanced current, the overall power losses in the distribution network when using the EPT to replace the conventional transformer are analyzed, and the conditions in which the application of the EPT can cause less power losses...
Adaptive control of manipulators handling hazardous waste
International Nuclear Information System (INIS)
Colbaugh, R.; Glass, K.
1994-01-01
This article focuses on developing a robot control system capable of meeting hazardous waste handling application requirements, and presents as a solution an adaptive strategy for controlling the mechanical impedance of kinematically redundant manipulators. The proposed controller is capable of accurate end-effector impedance control and effective redundancy utilization, does not require knowledge of the complex robot dynamic model or parameter values for the robot or the environment, and is implemented without calculation of the robot inverse transformation. Computer simulation results are given for a four degree of freedom redundant robot under adaptive impedance control. These results indicate that the proposed controller is capable of successfully performing important tasks in robotic waste handling applications. (author) 3 figs., 39 refs
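The target impedance behavior referred to above can be written in its standard textbook form (the paper's exact formulation may differ):

```latex
\[
  M_d\,\ddot{\tilde{x}} + B_d\,\dot{\tilde{x}} + K_d\,\tilde{x} = F_{\mathrm{ext}},
  \qquad \tilde{x} = x - x_d ,
\]
```

where $M_d$, $B_d$ and $K_d$ are the desired end-effector inertia, damping and stiffness, $x_d$ is the reference trajectory, and $F_{\mathrm{ext}}$ is the contact force. The adaptive controller's task is to impose this relation without knowledge of the robot or environment dynamics, which is exactly the property the abstract highlights for hazardous-waste handling.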
Developments in consequence modelling of accidental releases of hazardous materials
Boot, H.
2012-01-01
The modelling of consequences of releases of hazardous materials in the Netherlands has mainly been based on the “Yellow Book”. Although there is no updated version of this official publication, new insights have been developed during the last decades. This article will give an overview of new
Directory of Open Access Journals (Sweden)
R. Hajiabadi
2016-10-01
Full Text Available Introduction One reason for the complexity of predicting hydrological phenomena, especially time series, is the existence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing. Appropriate preprocessing makes the estimation of these phenomena easier. Preprocessing is particularly effective in data-driven models such as artificial neural networks, gene expression programming and support vector machines, because data quality is important in these models. The present study, by considering denoising and data transformation as two different preprocessing steps, tries to improve the results of intelligent models. In this study two different intelligent models, Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods In this study, Gene Expression Programming and Artificial Neural Network are used as intelligent models for suspended sediment load estimation; the impacts of the denoising and logarithmic transformation approaches as data preprocessors are then evaluated and compared with respect to result improvement. Two different logarithmic transforms are considered in this research, LN and LOG. Wavelet transformation is used for time series denoising. To denoise with wavelet transforms, the time series is first decomposed at one level into an approximation part and a detail part, and the high-frequency (detail) part is then removed as noise. Given the ability of gene expression programming and artificial neural networks to analyze nonlinear systems, daily values of suspended sediment load of the Skunk River in the USA, during a 5-year period, are investigated and then estimated. 4 years of
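The one-level denoising step described above (decompose into approximation and detail, drop the detail part, reconstruct) can be sketched in plain Python with a Haar wavelet; the study's actual wavelet family is not stated here, so Haar is an illustrative choice.

```python
# One-level Haar wavelet denoising: decompose the series into
# approximation + detail coefficients, set the detail (high-frequency)
# part to zero, and reconstruct. Pure-Python illustrative sketch.
import math

def haar_denoise(x):
    """Remove one-level Haar detail coefficients from x (even length)."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    # Reconstruction with all detail coefficients zeroed:
    y = []
    for a in approx:
        y.extend([a / s, a / s])   # each sample pair becomes its mean
    return y

noisy = [1.0, 1.2, 2.0, 1.8, 3.0, 3.4]
print(haar_denoise(noisy))  # pairwise means of adjacent samples
```

Zeroing the detail coefficients at one level is exactly the "remove the high-frequency part as noise" operation the abstract describes; deeper decompositions would smooth more aggressively.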
An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest
Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon
2017-01-01
In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or t...
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
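The final combination step, turning per-rupture peak intensities and rupture probabilities into a probabilistic hazard curve, can be sketched as follows under a simple independence assumption. The numbers are made up; this is not CyberShake code or data.

```python
# Sketch of building a hazard curve from simulated peak intensities:
# for each rupture, the fraction of its variations exceeding a
# threshold is weighted by the rupture probability, and exceedance
# probabilities are combined assuming independent ruptures.

def hazard_curve(ruptures, thresholds):
    """ruptures: list of (rupture_prob, [peak intensity per variation])."""
    curve = []
    for x in thresholds:
        p_no_exceed = 1.0
        for prob, ims in ruptures:
            frac = sum(im > x for im in ims) / len(ims)
            p_no_exceed *= (1.0 - prob * frac)
        curve.append(1.0 - p_no_exceed)
    return curve

# Two hypothetical ruptures, three rupture variations each:
ruptures = [(0.01, [0.2, 0.4, 0.6]), (0.002, [0.5, 0.9, 1.1])]
print(hazard_curve(ruptures, [0.3, 0.8]))
```

Because every rupture variation contributes its own simulated waveform, the site-specific ground-motion variability enters the curve directly instead of through an ergodic sigma, which is the point the abstract makes.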
Madrasah Culture Based Transformational Leadership Model
Directory of Open Access Journals (Sweden)
Nur Khoiri
2016-10-01
Full Text Available Leadership is the ability to influence and direct behavior, together with particular expertise in the field of a group that wants to achieve its goals. A dynamic organization requires a transformational leadership model. A school principal, as a leader at school, aims to actualize good learning leadership. Learning leadership focuses on learning, whose components include the curriculum, the teaching and learning process, assessment, teacher assessment and development, good service in learning, and developing a learning community in schools. It is based on organizational culture as the values, assumptions and beliefs that evolved from the roots of members' thinking, are believed by all members of the organization, and are implemented in everyday life in a way that can give meaning. Keywords: leadership, transformational leadership, headmaster, instructional leadership, organizational culture.
Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site
Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.
2012-04-01
The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and the quality of data are generally very heterogeneous at a regional scale, it is necessary to take the uncertainty into account in the analysis. In this perspective, a new hazard modeling method is developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking data uncertainty into account. This method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert-based approach is still necessary to finalize the maps. Indeed, it is the only way to take into account some influential factors in slope stability, such as the heterogeneity of the geological formations or the effects of anthropogenic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios, and especially their precipitation component, into the ALICE program with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and different hydrological contexts varying in time. This communication, realized within the framework of the SafeLand project, is supported by the European Commission under the 7th Framework Programme for Research and Technological
Energy Technology Data Exchange (ETDEWEB)
Shi, Yan; Wu, Tiecheng; Cai, Maolin; Liu, Chong [Beihang University, Beijing (China)
2016-03-15
The hydropneumatic transformer (HP transformer for short) is used to pump pressurized hydraulic oil. However, due to its insufficient usage of energy and low efficiency, a new kind of HP transformer, the EEUHP transformer (Expansion energy used hydropneumatic transformer), was proposed. To illustrate the characteristics of the EEUHP transformer, a mathematical model was built. To verify the mathematical model, an experimental prototype was set up and studied. Through simulation and experimental study of the EEUHP transformer, the influence of five key parameters on the output flow of the EEUHP transformer was obtained, and some conclusions can be drawn. Firstly, the mathematical model was proved to be valid. Furthermore, the EEUHP transformer consumes less compressed air than the normal HP transformer when the output flow of the two kinds of transformers is almost the same. Moreover, with an increase in the output pressure, the output flow decreases sharply. Finally, with an increase in the effective area of the hydraulic output port, the output flow increases distinctly. This research can be referred to in the performance and design optimization of EEUHP transformers.
Modeling emergency evacuation for major hazard industrial sites
International Nuclear Information System (INIS)
Georgiadou, Paraskevi S.; Papazoglou, Ioannis A.; Kiranoudis, Chris T.; Markatos, Nikolaos C.
2007-01-01
A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies
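The node-to-node Markov evolution described above can be sketched as repeated application of a row-stochastic transition matrix to the occupancy distribution; the three-node network and probabilities below are illustrative, not the paper's road system.

```python
# Sketch of the node-based Markov evacuation model: the expected
# fraction of evacuees per node evolves via a row-stochastic
# transition matrix built from the road network. Illustrative
# three-node chain in which node 2 is the safe destination.

def step(dist, P):
    """One time step: new_j = sum_i dist_i * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [
    [0.5, 0.5, 0.0],   # node 0: stay, or move toward node 1
    [0.0, 0.4, 0.6],   # node 1: stay, or reach the safe node
    [0.0, 0.0, 1.0],   # node 2: safe zone (absorbing)
]
dist = [1.0, 0.0, 0.0]          # everyone starts at node 0
for _ in range(50):
    dist = step(dist, P)
print(dist)  # nearly all expected mass ends in the safe node
```

In the paper the transition probabilities additionally depend on the dynamically changing states of the origin and destination nodes (congestion), and a Monte Carlo version of the same chain yields individual trajectories, and hence dose samples, rather than only the expected distribution.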
Modelling the interaction between plasticity and the austenite-martensite transformation
Kouznetsova, V.G.; Geers, M.G.D.
2007-01-01
Many advanced steels, such as high strength steels and TRIP steels, owe their excellent combination of strength and ductility to the complex microstructural behaviour involving the austenite to martensite phase transformation. In this paper a physically-based model for martensitic transformation
Sato Processes in Default Modeling
DEFF Research Database (Denmark)
Kokholm, Thomas; Nicolato, Elisa
2010-01-01
In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative haz...
An enhanced Brinson model with modified kinetics for martensite transformation
Energy Technology Data Exchange (ETDEWEB)
Kim, Young-Jin; Lee, Jung Ju [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jeong, Ju-Won [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Lim, Jae Hyuk [Chonbuk National University, Jeonju (Korea, Republic of)
2017-03-15
We propose an enhanced Brinson model with modified kinetics for martensite transformation. Two additional material constants are considered to follow the stress-temperature diagram above the austenite start temperature (As), along with treatment to keep the continuity of the martensite volume fraction and the path dependency of the phase transformation. To demonstrate the performance of the proposed model, we implement this algorithm in an ABAQUS user subroutine, then conduct several numerical simulations and compare their results with SMA wire experiments as well as with those of three-dimensional SMA constitutive models. From the results, it turns out that the proposed model is as accurate as the three-dimensional models and shows better accuracy than the original Brinson model in terms of recovery stress.
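For orientation, one commonly quoted form of the original Brinson cosine kinetics for the stress-induced martensite fraction is sketched below. The material constants are illustrative, and the paper's two additional constants and continuity treatment are not reproduced.

```python
import math

# A commonly quoted form of the Brinson cosine kinetics for the
# stress-induced martensite volume fraction during forward
# transformation (T > Ms). Constants are illustrative only.

def xi_stress(sigma, T, xi0=0.0, sig_s=100e6, sig_f=170e6,
              C_M=8e6, M_s=291.0):
    """Martensite fraction between the start and finish stresses."""
    arg = math.pi / (sig_s - sig_f) * (sigma - sig_f - C_M * (T - M_s))
    return (1 - xi0) / 2 * math.cos(arg) + (1 + xi0) / 2

T = 300.0
start = 100e6 + 8e6 * (T - 291.0)    # transformation start stress at T
finish = 170e6 + 8e6 * (T - 291.0)   # transformation finish stress at T
print(xi_stress(start, T), xi_stress(finish, T))  # ~0 at start, ~1 at finish
```

The fraction rises smoothly from xi0 at the temperature-shifted start stress to 1 at the finish stress; the enhancement proposed in the paper modifies how this diagram continues above As.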
Hazard interactions and interaction networks (cascades) within multi-hazard methodologies
Gill, Joel C.; Malamud, Bruce D.
2016-08-01
This paper combines research and commentary to reinforce the importance of integrating hazard interactions and interaction networks (cascades) into multi-hazard methodologies. We present a synthesis of the differences between multi-layer single-hazard approaches and multi-hazard approaches that integrate such interactions. This synthesis suggests that ignoring interactions between important environmental and anthropogenic processes could distort management priorities, increase vulnerability to other spatially relevant hazards or underestimate disaster risk. In this paper we proceed to present an enhanced multi-hazard framework through the following steps: (i) description and definition of three groups (natural hazards, anthropogenic processes and technological hazards/disasters) as relevant components of a multi-hazard environment, (ii) outlining of three types of interaction relationship (triggering, increased probability, and catalysis/impedance), and (iii) assessment of the importance of networks of interactions (cascades) through case study examples (based on the literature, field observations and semi-structured interviews). We further propose two visualisation frameworks to represent these networks of interactions: hazard interaction matrices and hazard/process flow diagrams. Our approach reinforces the importance of integrating interactions between different aspects of the Earth system, together with human activity, into enhanced multi-hazard methodologies. Multi-hazard approaches support the holistic assessment of hazard potential and consequently disaster risk. We conclude by describing three ways by which understanding networks of interactions contributes to the theoretical and practical understanding of hazards, disaster risk reduction and Earth system management. Understanding interactions and interaction networks helps us to better (i) model the observed reality of disaster events, (ii) constrain potential changes in physical and social vulnerability
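A hazard interaction matrix of the kind proposed here can be encoded as a directed graph whose edges are triggering relationships, from which interaction networks (cascades) are enumerated by traversal. The entries below are generic illustrative examples, not the paper's case-study data.

```python
# Sketch of a hazard interaction matrix encoded as a directed graph:
# each key can trigger the hazards it maps to. Cascades are then the
# triggering chains reachable from a primary hazard. Entries are
# illustrative examples only.

triggers = {
    "earthquake":   ["landslide", "liquefaction"],
    "landslide":    ["flood"],          # e.g. landslide-dam breach
    "storm":        ["flood", "landslide"],
    "flood":        [],
    "liquefaction": [],
}

def cascades(source, chain=None):
    """All triggering chains starting from a primary hazard."""
    chain = (chain or []) + [source]
    paths = [chain]
    for secondary in triggers[source]:
        if secondary not in chain:       # avoid revisiting a hazard
            paths.extend(cascades(secondary, chain))
    return paths

for path in cascades("earthquake"):
    print(" -> ".join(path))
```

This is the computational counterpart of the paper's hazard interaction matrix and flow-diagram visualisations: a multi-layer single-hazard analysis would see only the first element of each chain.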
University program in hazardous chemical and radioactive waste management
International Nuclear Information System (INIS)
Parker, F.L.
1987-01-01
The three main functions of a university program are education, training, and research. At Vanderbilt University, there is a Solid and Hazardous Waste option in the Master of Science in Engineering Program. The two main foci are treatment of wastes and environmental transport and transformation of the wastes. Courses in Hazardous Waste Engineering and Radioactive Waste Disposal present a synoptic view of the field, including legal, economic, and institutional aspects as well as the requisite technical content. The training is accomplished for some of the students through the aegis of an internship program sponsored by the US Department of Energy. In the summer between the two academic years of the program, the student works at a facility where decontamination and/or decommissioning and/or remedial actions are taking place. Progress in understanding the movement, transformation, and fate of hazardous materials in the environment is so rapid that it will not be possible to be current in the field without participating in that discovery. Therefore, the students study these processes and contribute to new knowledge. Some recent examples are the study of safety factors implicit in assuming a saturated zone below a hazardous waste landfill when an unsaturated zone exists, the application of probabilistic risk assessment to three National Priority List sites in Tennessee, and the explanation of why certain organics precede pH, conductivity and nitrates through a clay liner at a hazardous waste disposal site.
Bayesian spatial transformation models with applications in neuroimaging data.
Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G
2013-12-01
The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.
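The Box-Cox transformation at the heart of the varying transformation model is simple to state, and the limit λ → 0 recovers the log transform:

```python
import math

# The Box-Cox power transformation used to Gaussianize skewed,
# positive-valued data:  y = (x**lam - 1)/lam  for lam != 0,
# and y = log(x) in the lam -> 0 limit. Requires x > 0.

def box_cox(x, lam):
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

print(box_cox(4.0, 0.5))     # (sqrt(4) - 1)/0.5 = 2.0
print(box_cox(math.e, 0.0))  # log(e) = 1.0
```

In the STM the exponent λ is allowed to vary spatially over voxels, so each brain region gets the transformation that best normalizes its intensity distribution.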
Laplace transform in tracer kinetic modeling
Energy Technology Data Exchange (ETDEWEB)
Hauser, Eliete B., E-mail: eliete@pucrs.br [Instituto do Cerebro (InsCer/FAMAT/PUC-RS), Porto Alegre, RS, (Brazil). Faculdade de Matematica
2013-07-01
The main objective of this paper is to quantify the pharmacokinetic processes of absorption, distribution and elimination of a radiopharmaceutical (tracer), using the Laplace transform method. When the drug is administered intravenously, absorption is complete and the drug is available in the bloodstream to be distributed throughout the whole body in all tissues and fluids, and to be eliminated. Mathematical modeling seeks to describe the processes of distribution and elimination through compartments, where distinct pools of tracer (spatial location or chemical state) are assigned to different compartments. A compartment model is described by a system of differential equations, where each equation represents the sum of all the transfer rates to and from a specific compartment. In this work a two-tissue irreversible compartment model is used for the description of the tracer [{sup 18}F]2-fluoro-2-deoxy-D-glucose. In order to determine the parameters of the model, it is necessary to have information about the tracer delivery in the form of an input function representing the time-course of the tracer concentration in arterial blood or plasma. We estimate the arterial input function in two stages and apply the Levenberg-Marquardt method to solve the nonlinear regressions. The transport of FDG across the arterial blood is very fast in the first ten minutes and then decreases slowly. We use the Heaviside function to represent this situation, and this is the main contribution of this study. We apply the Laplace transform and obtain the analytical solution for the two-tissue irreversible compartment model. The remaining step is to determine the arterial input function. (author)
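Under a simplified Heaviside (step) plasma input, an illustration of the approach rather than the paper's estimated two-stage input function, the two-tissue irreversible system C1' = K1·Cp − (k2+k3)·C1, C2' = k3·C1 inverts from the Laplace domain to closed form. The sketch below checks that closed form against direct numerical integration; all rate constants are made up.

```python
import math

# Analytical solution of the irreversible two-tissue compartment model
#   C1' = K1*Cp - (k2 + k3)*C1,   C2' = k3*C1
# obtained via Laplace transform for a Heaviside plasma input
# Cp(t) = A. Rate constants are illustrative, not fitted FDG values.

def two_tissue_step(t, A=1.0, K1=0.1, k2=0.15, k3=0.05):
    a = k2 + k3
    C1 = A * K1 / a * (1.0 - math.exp(-a * t))
    C2 = A * K1 * k3 / a * (t - (1.0 - math.exp(-a * t)) / a)
    return C1, C2

# Sanity check against explicit Euler integration of the same ODEs
# (constants mirror the defaults above):
C1n = C2n = 0.0
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    dC1 = 0.1 * 1.0 - 0.2 * C1n
    dC2 = 0.05 * C1n
    C1n += dC1 * dt
    C2n += dC2 * dt
print(two_tissue_step(T), (C1n, C2n))  # the two pairs agree closely
```

A realistic input function (e.g. the paper's piecewise form built from Heaviside terms) would enter the same Laplace-domain expressions C1(s) = K1·Cp(s)/(s + k2 + k3), C2(s) = k3·C1(s)/s before inversion.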
Modelling and Simulation of the Diode Split Transformer
DEFF Research Database (Denmark)
Østergaard, Leo
a significant influence on the picture quality. The most critical component is undoubtedly the diode split transformer (DST). Therefore, if developing a simulation model of the DST is possible, a significant step has been taken in the attempt to model the entire horizontal deflection circuit and to obtain...
International Nuclear Information System (INIS)
Decker, K.; Hirata, K.; Groudev, P.
2016-01-01
The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, to identify methodological challenges, and to propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances of science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)
Coaching Model + Clinical Playbook = Transformative Learning.
Fletcher, Katherine A; Meyer, Mary
2016-01-01
Health care employers demand that workers be skilled in clinical reasoning, able to work within complex interprofessional teams to provide safe, quality patient-centered care in a complex evolving system. To this end, there have been calls for radical transformation of nursing education including the development of a baccalaureate generalist nurse. Based on recommendations from the American Association of Colleges of Nursing, faculty concluded that clinical education must change moving beyond direct patient care by applying the concepts associated with designer, manager, and coordinator of care and being a member of a profession. To accomplish this, the faculty utilized a system of focused learning assignments (FLAs) that present transformative learning opportunities that expose students to "disorienting dilemmas," alternative perspectives, and repeated opportunities to reflect and challenge their own beliefs. The FLAs collected in a "Playbook" were scaffolded to build the student's competencies over the course of the clinical experience. The FLAs were centered on the 6 Quality and Safety Education for Nurses competencies, with 2 additional concepts of professionalism and systems-based practice. The FLAs were competency-based exercises that students performed when not assigned to direct patient care or had free clinical time. Each FLA had a lesson plan that allowed the student and faculty member to see the competency addressed by the lesson, resources, time on task, student instructions, guide for reflection, grading rubric, and recommendations for clinical instructor. The major advantages of the model included (a) consistent implementation of structured learning experiences by a diverse teaching staff using a coaching model of instruction; (b) more systematic approach to present learning activities that build upon each other; (c) increased time for faculty to interact with students providing direct patient care; (d) guaranteed capture of selected transformative
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of: *Introduction by Russell W. Graymer *Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson *Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk *Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk *Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer *Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike The plates consist of: *Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk *Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven
Directory of Open Access Journals (Sweden)
Max Mühlhäuser
2011-01-01
Full Text Available Developing applications comprising service composition is a complex task. Therefore, to lower the skill barrier for developers, it is important to describe the problem at hand on an abstract level and not to focus on implementation details. This can be done using declarative programming, which allows one to describe only the result of the problem (which is what the developer wants) rather than a description of the implementation. We therefore use purely declarative model-to-model transformations written in a universal model transformation language which is capable of handling even non-functional properties using optimization and mathematical programming. This makes it easier for the developer to understand and describe service composition and non-functional properties.
Efficient Finite Element Models for Calculation of the No-load losses of the Transformer
Directory of Open Access Journals (Sweden)
Kamran Dawood
2017-10-01
Full Text Available Different transformer models are examined for the calculation of the no-load losses using finite element analysis. Two-dimensional and three-dimensional finite element analyses are used for the simulation of the transformer. Results of the finite element method are also compared with the experimental results. The results show that the 3-dimensional model provides high accuracy compared to the 2-dimensional full and half models. However, the 2-dimensional half model is the least time-consuming method compared to the 3-dimensional and 2-dimensional full models. The simulation time taken by the different transformer models is also compared. The difference between the 3-dimensional finite element method and the experimental results is less than 3%. These numerical methods can help transformer designers to minimize the development of prototype transformers.
St. Louis area earthquake hazards mapping project; seismic and liquefaction hazard maps
Cramer, Chris H.; Bauer, Robert A.; Chung, Jae-won; Rogers, David; Pierce, Larry; Voigt, Vicki; Mitchell, Brad; Gaunt, David; Williams, Robert; Hoffman, David; Hempen, Gregory L.; Steckel, Phyllis; Boyd, Oliver; Watkins, Connor M.; Tucker, Kathleen; McCallister, Natasha
2016-01-01
We present probabilistic and deterministic seismic and liquefaction hazard maps for the densely populated St. Louis metropolitan area that account for the expected effects of surficial geology on earthquake ground shaking. Hazard calculations were based on a map grid of 0.005°, or about every 500 m, and are thus higher in resolution than any earlier studies. To estimate ground motions at the surface of the model (e.g., site amplification), we used a new detailed near‐surface shear‐wave velocity model in a 1D equivalent‐linear response analysis. When compared with the 2014 U.S. Geological Survey (USGS) National Seismic Hazard Model, which uses a uniform firm‐rock‐site condition, the new probabilistic seismic‐hazard estimates document much more variability. Hazard levels for upland sites (consisting of bedrock and weathered bedrock overlain by loess‐covered till and drift deposits) show up to twice the ground‐motion values for peak ground acceleration (PGA), and similar ground‐motion values for 1.0 s spectral acceleration (SA). Probabilistic ground‐motion levels for lowland alluvial floodplain sites (generally the 20–40‐m‐thick modern Mississippi and Missouri River floodplain deposits overlying bedrock) exhibit up to twice the ground‐motion levels for PGA, and up to three times the ground‐motion levels for 1.0 s SA. Liquefaction probability curves were developed from available standard penetration test data assuming typical lowland and upland water table levels. A simplified liquefaction hazard map was created from the 5%‐in‐50‐year probabilistic ground‐shaking model. The liquefaction hazard ranges from low on the uplands to high (>60% of area expected to liquefy) in the lowlands. Because many transportation routes, power and gas transmission lines, and population centers exist in or on the highly susceptible lowland alluvium, these areas in the St. Louis region are at significant potential risk from seismically induced liquefaction and associated
Directory of Open Access Journals (Sweden)
Milevski Ivica
2013-01-01
Full Text Available In this paper, one approach to Geographic Information System (GIS) and Remote Sensing (RS) based assessment of potential natural hazard areas (excess erosion, landslides, flash floods and fires) is presented. For that purpose, Pehchevo Municipality in the easternmost part of the Republic of Macedonia is selected as a case study area because of the high local impact of natural hazards on the environment, the socio-demographic situation and the local economy. First of all, the most relevant static factors for each type of natural hazard are selected (topography, land cover, anthropogenic objects and infrastructure). With GIS and satellite imagery, multi-layer calculation is performed based on available traditional equations, clustering or discretization procedures. In this way, suitable relatively "static" natural hazard maps (models) are produced. Then, dynamic (mostly climate-related) factors are included in the previous models, resulting in appropriate scenarios correlated with different amounts of precipitation, temperature, wind direction etc. Finally, the GIS-based scenarios are evaluated and tested with field checks or very fine resolution Google Earth imagery, showing good accuracy. Further development of such GIS models in connection with automatic remote meteorological stations and dynamic satellite imagery (like MODIS) will provide on-time warning of coming natural hazards, avoiding potential damage or even casualties.
Theory of the Anderson impurity model: The Schrieffer–Wolff transformation reexamined
International Nuclear Information System (INIS)
Kehrein, S.K.; Mielke, A.
1996-01-01
We test the method of infinitesimal unitary transformations recently introduced by Wegner on the Anderson single impurity model. It is demonstrated that infinitesimal unitary transformations, in contrast to the Schrieffer–Wolff transformation, allow the construction of an effective Kondo Hamiltonian consistent with the established results in this well understood model. The main reason for this is the intrinsic energy scale separation of Wegner's approach with respect to arbitrary energy differences coupled by matrix elements. This allows the construction of an effective Hamiltonian without facing a vanishing energy denominator problem. Similar energy denominator problems are troublesome in many models. Infinitesimal unitary transformations have the potential to provide a general framework for the systematic derivation of effective Hamiltonians without such problems. Copyright © 1996 Academic Press, Inc.
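For context, Wegner's flow of infinitesimal unitary transformations is commonly written in the following standard form (not reproduced from this paper):

```latex
\frac{dH(\ell)}{d\ell} = \left[\eta(\ell),\, H(\ell)\right],
\qquad
\eta(\ell) = \left[H_{\mathrm{diag}}(\ell),\, H(\ell)\right].
```

With this canonical generator, an off-diagonal matrix element coupling states separated by energy $\Delta E$ decays roughly as $e^{-\ell\,\Delta E^{2}}$. This is the intrinsic energy-scale separation the abstract refers to: large energy differences are eliminated first along the flow, so the construction never produces the vanishing energy denominators that afflict a single Schrieffer–Wolff step.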
Plasticity induced by phase transformation in steel: experiment vs modeling
International Nuclear Information System (INIS)
Tahimi, Abdeladhim
2011-01-01
The objectives of this work are: (i) to understand the mechanisms and phenomena involved in the plasticity of steels in the presence of a diffusive or martensitic phase transformation; (ii) to develop tools for predicting TRIP which are able to correctly reproduce the macroscopic deformation for cases of complex loading and could also provide information about local elasto-visco-plastic interactions between product and parent phases. To this purpose, new experimental tests are conducted on 35NCD16 steel for the austenite-to-martensite transformation and on 100C6 steel for the austenite-to-pearlite transformation. The elasto-viscoplastic properties of the austenite and pearlite of the 100C6 steel are characterized through tension-compression and relaxation tests. The parameters of macro-homogeneous and crystal-based constitutive laws could then be identified so as to analyse different models with respect to the experimental TRIP: the analytical models of Leblond (1989) and Taleb and Sidoroff (2003) but also, above all, different numerical models distinguished by their prevailing assumptions concerning the local kinetics and the constitutive laws. An extension of the single-grain model dedicated to martensitic transformations developed during the thesis of S. Meftah (2007) is proposed. It consists in introducing the polycrystalline character of the austenite through a homogenization process based on a self-consistent scheme, by calculating the properties of an Equivalent Homogeneous Medium (EHM). (author)
Uncertainty on shallow landslide hazard assessment: from field data to hazard mapping
Trefolini, Emanuele; Tolo, Silvia; Patelli, Eduardo; Broggi, Matteo; Disperati, Leonardo; Le Tuan, Hai
2015-04-01
Shallow landsliding that involves Hillslope Deposits (HD), the surficial soil covering the bedrock, is an important process of erosion, transport and deposition of sediment along hillslopes. Although shallow landslides generally mobilize relatively small volumes of material, they represent the most hazardous factor in mountain regions due to their high velocity and the common absence of warning signs. Moreover, increasing urbanization and likely climate change make shallow landslides a source of widespread risk; the interest of the scientific community in this process has therefore grown over the last three decades. One of the main aims of research projects on this topic is to perform robust shallow landslide hazard assessment for wide areas (regional assessment), in order to support sustainable spatial planning. Currently, three main methodologies may be implemented to assess regional shallow landslide hazard: expert evaluation, probabilistic (or data mining) methods, and methods based on physical models. The aim of this work is to evaluate the uncertainty of shallow landslide hazard assessment based on physical models, taking into account spatial variables such as geotechnical and hydrogeologic parameters as well as hillslope morphometry. To achieve this goal, a wide dataset of geotechnical properties (shear strength, permeability, depth and unit weight) of HD was gathered by integrating field survey, in situ and laboratory tests. This spatial database was collected from a study area of about 350 km² including different bedrock lithotypes and geomorphological features. The uncertainty associated with each step of the hazard assessment process (e.g. field data collection, regionalization of site-specific information and numerical modelling of hillslope stability) was carefully characterized. The most appropriate probability density function (PDF) was chosen for each numerical variable and we assessed the uncertainty propagation on HD strength parameters obtained by
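Uncertainty propagation of this kind, applied to a physically based stability model, can be sketched with a Monte Carlo sweep over an infinite-slope factor of safety; all distributions and parameter values below are assumed for illustration, not taken from the study area:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative parameter distributions for a hillslope deposit (all values assumed):
phi = np.radians(rng.normal(32.0, 3.0, n))               # friction angle [rad]
c = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)   # cohesion [kPa]
z = rng.uniform(1.0, 2.5, n)                             # deposit thickness [m]
m = rng.uniform(0.0, 1.0, n)                             # water table ratio
gamma = 19.0                                             # unit weight [kN/m^3]
gamma_w = 9.81                                           # water unit weight
beta = np.radians(35.0)                                  # slope angle

# Infinite-slope factor of safety with pore pressure u = gamma_w * m * z * cos^2(beta):
fs = (c + (gamma * z * np.cos(beta)**2 - gamma_w * m * z * np.cos(beta)**2)
      * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

p_failure = np.mean(fs < 1.0)
print(f"P(FS < 1) = {p_failure:.3f}")
```

Repeating this cell by cell over a morphometric raster yields a probabilistic hazard map in which the chosen PDFs drive the spread of the result.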
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on survival time. However, the assumption of a constant hazard ratio over time in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not rely on this assumption and can be used as an alternative to the PH model when it is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. The analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best fitted model (the log-normal AFT model) showed that covariates such as women’s educational level, husband’s educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
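A minimal sketch of the AIC-based model comparison, using synthetic right-censored data (not the Indonesian FBI survey) and comparing the exponential and log-normal survival models, might look like this:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
n = 500
# Synthetic log-normal event times with random right-censoring (an illustrative
# stand-in for first-birth-interval data; no real survey data is used here).
t_event = rng.lognormal(mean=3.0, sigma=0.8, size=n)
t_cens = rng.uniform(5, 120, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

# Exponential model has a closed-form MLE: rate = events / total follow-up time.
rate = event.sum() / time.sum()
ll_exp = np.sum(event * np.log(rate) - rate * time)
aic_exp = 2 * 1 - 2 * ll_exp

def negloglik_lognormal(params):
    """Censored log-likelihood: log-pdf for events, log-survival for censored."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    logpdf = stats.lognorm.logpdf(time, s=sigma, scale=np.exp(mu))
    logsf = stats.lognorm.logsf(time, s=sigma, scale=np.exp(mu))
    return -np.sum(event * logpdf + (1 - event) * logsf)

res = optimize.minimize(negloglik_lognormal, x0=[np.log(time.mean()), 0.0])
aic_ln = 2 * 2 + 2 * res.fun

print(f"AIC exponential: {aic_exp:.1f}, AIC log-normal: {aic_ln:.1f}")
```

With clearly non-exponential data the log-normal model attains the lower AIC, mirroring the selection logic used in the paper.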
A decision model for the risk management of hazardous processes
International Nuclear Information System (INIS)
Holmberg, J.E.
1997-03-01
A decision model for the risk management of hazardous processes is formulated in this study as an optimisation problem of a point process. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design, and (3) operational decisions. These three control methods play quite different roles in practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximizing the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of operational strategies given a certain design and licensing time. This is done by a dynamic risk model (a marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long-term control variable guiding the selection of operational alternatives in short-term problems is studied. The optimisation problem is solved by the stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)
Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui
2015-05-01
The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems.
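A toy sketch of template-driven transformation is given below. It uses plain Python dicts as stand-ins for CEM entities and openEHR archetypes; the field names and the archetype-id pattern are hypothetical, and the real approach operates on OWL ontologies with reasoning rather than on dicts:

```python
# Purely illustrative: a mapping table plays the role of a transformation
# template, renaming CEM-style fields to openEHR-style ones.
CEM_TO_OPENEHR_TEMPLATE = {
    "key": "data",      # hypothetical field correspondences
    "data": "value",
    "quals": "state",
}

def transform(cem_entity: dict) -> dict:
    """Map a CEM-like entity onto an archetype-like structure via the template."""
    archetype = {"archetype_id": f"openEHR-EHR-OBSERVATION.{cem_entity['name']}.v1"}
    for cem_field, openehr_field in CEM_TO_OPENEHR_TEMPLATE.items():
        if cem_field in cem_entity:
            archetype[openehr_field] = cem_entity[cem_field]
    return archetype

cem = {"name": "heart_rate", "key": "pulse", "data": 72, "quals": "resting"}
print(transform(cem))
```

The declarative mapping table is what makes the approach reusable: extending it to another clinical model specification means supplying a new table, not new code.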
Transforming community access to space science models
MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti
2012-04-01
Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.
Dilatation transformation in the string model
Energy Technology Data Exchange (ETDEWEB)
Chikashige, Y [Tokyo Univ. (Japan). Coll. of General Education; Hosoda, M; Saito, S
1975-05-01
Dilatation transformation is discussed in the string model. We show that it becomes meaningful in the infinite slope limit of Regge trajectories for the motion of a free string. It turns out to be equivalent to the high energy limit of the dual amplitudes, with the Regge slope kept finite, in the case of interacting strings. The scaling phenomenon is explained from this point of view.
Evaluation of Rule-based Modularization in Model Transformation Languages illustrated with ATL
Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric
This paper studies ways of modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of the relations between source and target metamodels and on the basis of generic transformation
Tourism facing the challenge of recurring natural hazards: a view from Cancún
Frank Babinger
2012-01-01
This article discusses the duality between economic development based on tourism and the impact of land occupation at the expense of an environment that includes specific natural hazards. The transformation of coastal areas to be occupied by tourism is one of the serious problems that is not taken into account when planning the activity. Cancún is a paradigmatic model in which explosive growth in tourists, residents and tourist buildings has led to the massive occupation of a coastal are...
Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik
2016-11-01
Dynamic modelling and simulation of an integrated nanofiltration-forward osmosis system was carried out along with economic evaluation to pave the way for the scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by turning wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is well reflected in the high overall correlation coefficient value (R² > 0.98), low relative error (osmosis loop at a reasonably high flux of 56-58 l per square meter per hour.
Measures to assess the prognostic ability of the stratified Cox proportional hazards model
DEFF Research Database (Denmark)
The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne
2009-01-01
Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...
Modeling and analysis on ring-type piezoelectric transformers.
Ho, Shine-Tzong
2007-11-01
This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, the vibration characteristics of a piezoelectric ring with free boundary conditions are analyzed first. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system of the transformer. Quantities such as voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency for the PT are also presented. Experiments were conducted to verify the theoretical analysis, and good agreement was obtained.
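One classical result related to the optimal load resistance mentioned above: for piezoelectric transformers, power transfer is commonly maximised by matching the load to the impedance of the clamped output capacitance. The sketch below applies that textbook approximation with assumed (not measured) parameter values:

```python
import math

# Illustrative ring-PT output-side parameters (assumed, not from the paper):
f_r = 70e3       # operating frequency near resonance [Hz]
C_d2 = 1.2e-9    # clamped output capacitance [F]

# Classical matching result for piezoelectric transformers: the optimal load
# resistance approximately equals the impedance of C_d2 at the operating frequency.
R_opt = 1.0 / (2 * math.pi * f_r * C_d2)
print(f"optimal load resistance ~ {R_opt / 1e3:.1f} kOhm")
```

The full electromechanical model refines this estimate by including the mechanical branch losses, which is where the maximum-efficiency load departs from the pure matching value.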
Conceptual geoinformation model of natural hazards risk assessment
Kulygin, Valerii
2016-04-01
Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of origin of natural hazards, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. The scenario approach is utilized to account for spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.
A balance principle approach for modeling phase transformation kinetics
International Nuclear Information System (INIS)
Lusk, M.; Krauss, G.; Jou, H.J.
1995-01-01
A balance principle is offered to model the volume fraction kinetics of phase transformations at a continuum level. This microbalance provides a differential equation for transformation kinetics which is coupled to the differential equations governing the mechanical and thermal aspects of the process. Application here is restricted to diffusive transformations for the sake of clarity, although the principle is discussed for martensitic phase transitions as well. Avrami-type kinetics are shown to result from a special class of energy functions. An illustrative example using a 0.5% C chromium steel demonstrates how TTT and CCT curves can be generated using a particularly simple effective energy function. (orig.)
MODELING CONTROLLED ASYNCHRONOUS ELECTRIC DRIVES WITH MATCHING REDUCERS AND TRANSFORMERS
Directory of Open Access Journals (Sweden)
V. S. Petrushin
2015-04-01
Purpose. To develop mathematical models of speed-controlled induction electric drives that jointly consider transformers, motors and loads, as well as matching reducers and transformers, in both static and dynamic regimes, for the analysis of their operating characteristics. Methodology. The mathematical modelling takes into account functional, mass, dimensional and cost indexes of reducers and transformers, which allows the engineering and economic aspects of speed-controlled induction electric drives to be considered. The mathematical models used for the examination of transient electromagnetic and electromechanical processes are based on systems of nonlinear differential equations with nonlinear coefficients (the parameters of the equivalent circuits of the motors, which vary at each operating point, owing among other things to saturation of the magnetic system and current displacement in the rotor winding of the induction motor). To raise the level of adequacy of the models, iron losses in the magnetic circuit as well as additional and mechanical losses are considered. Results. Modelling was performed for several speed-controlled induction electric drives that differ in their components but operate with loads equal in character and magnitude and with the required control range. Using families of characteristics, including mechanical ones, at various control parameters onto which the performance of the load mechanism is superimposed, the adjustment characteristics are obtained, representing the dependence of electrical, energy and thermal quantities on the angular speed of the motors. Originality. The proposed complex models of speed-controlled induction electric drives with matching reducers and transformers enable a well-founded selection of drive components. They can also be used as design models in the development of speed-controlled induction motors. Practical value. Operating characteristics of various speed-controlled induction electric
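One small piece of such a drive model, the steady-state torque-slip characteristic from the standard per-phase equivalent circuit of an induction motor, can be sketched as follows; all circuit parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

# Standard per-phase equivalent-circuit torque of an induction motor
# (all parameter values below are illustrative, not from the paper).
V = 230.0          # phase voltage [V]
f = 50.0           # supply frequency [Hz]
p = 2              # pole pairs
R1, X1 = 0.6, 1.1  # stator resistance / leakage reactance [Ohm]
R2, X2 = 0.4, 1.3  # rotor quantities referred to the stator [Ohm]

w_sync = 2 * np.pi * f / p                 # synchronous speed [rad/s]
s = np.linspace(0.001, 1.0, 1000)          # slip
T = (3 * V**2 * (R2 / s)) / (w_sync * ((R1 + R2 / s)**2 + (X1 + X2)**2))

s_peak = s[np.argmax(T)]
print(f"breakdown torque {T.max():.0f} Nm at slip {s_peak:.2f}")
```

Superimposing the load mechanism's torque-speed curve on this family of characteristics, evaluated over a range of supply frequencies and voltages, is how the adjustment characteristics described above are obtained.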
Laplace transform analysis of a multiplicative asset transfer model
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
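The transfer rule itself is straightforward to simulate directly; the sketch below implements it agent by agent (the approach the Laplace transform analysis improves upon) and confirms that total wealth is conserved while inequality grows. The agent count, step count and f are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps, f = 500, 100_000, 0.1

wealth = np.ones(n_agents)                         # everyone starts with unit wealth
pairs = rng.integers(n_agents, size=(n_steps, 2))  # random (giver, receiver) pairs
for i, j in pairs:
    if i != j:
        amount = f * wealth[i]   # the giver transfers a fixed fraction f of its wealth
        wealth[i] -= amount
        wealth[j] += amount

# Total wealth is conserved exactly; the distribution becomes strongly unequal.
gini = np.abs(wealth[:, None] - wealth[None, :]).mean() / (2 * wealth.mean())
print(f"total = {wealth.sum():.3f}, Gini = {gini:.2f}")
```

The simulation cost grows with the number of agents and steps, which is precisely the dependence the Laplace transform treatment of the master equation avoids.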
Flood Modelling of Banjir Kanal Barat (Integration of Hydrology Model and GIS
Directory of Open Access Journals (Sweden)
Muhammad Aris Marfai
2004-01-01
Hydrological modelling has advantages for river flood studies. Hydrological factors can be easily determined and calculated using a hydrological model. HEC-RAS (Hydrological Engineering Centre - River Analysis System) software is well known as hydrological modelling software for flood simulation and encroachment analysis of floodplain areas. For spatial performance and analysis of floods, the integration of Geographic Information Systems (GIS) and hydrological models is needed. The aims of this research are (1) to perform a flood encroachment analysis using HEC-RAS software, and (2) to generate a flood hazard map. The methodology comprises (1) generating geometric data as input for the HEC-RAS hydrological model, (2) inputting hydrological data, (3) generating the flood encroachment analysis, and (4) transforming the flood encroachment into a flood hazard map. The spatial pattern of the flood hazard is illustrated in a map. The result shows that a hydrological model integrated with GIS can be used to generate a flood hazard map. This method has advantages in the calculation of the hydrological factors of floods and in the spatial performance of the flood hazard map. For further analysis, a land-use map can be overlaid on the flood hazard map in order to assess the impact of flooding on land use.
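Step (4), turning a modelled water surface into a hazard map, can be sketched as a depth classification over a terrain profile; the elevations, water level and class breaks below are illustrative, not outputs of HEC-RAS:

```python
import numpy as np

# Toy 1D cross-section: terrain elevations [m] and a modelled water surface
# elevation (WSE) from a hydraulic model run (all numbers illustrative).
dem = np.array([12.0, 10.5, 9.8, 9.2, 9.0, 9.3, 10.1, 11.4])
wse = 10.0

depth = np.clip(wse - dem, 0.0, None)   # flood depth; dry cells get 0

# Classify depth into hazard classes, as when turning an encroachment
# result into a flood hazard map: 0 = none, 1 = low (< 0.5 m), 2 = high.
hazard = np.digitize(depth, bins=[1e-6, 0.5])
print(depth)
print(hazard)
```

In the GIS workflow the same operation runs on full rasters, and the resulting hazard grid is the layer overlaid with land use for impact assessment.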
Madrasah Culture Based Transformational Leadership Model
Nur Khoiri
2016-01-01
Leadership is the ability to influence and direct behavior, combined with particular expertise, within a group that wants to achieve its goals. A dynamic organization requires a transformational leadership model. A school principal, as a leader at school, aims to actualize good learning leadership. Learning leadership focuses on learning, whose components include curriculum, the teaching and learning process, assessment, teacher assessment and development, good service in learning, and developing a ...
Prospective of Transformation of Current Models of the Global Pharmaceutical Market
Directory of Open Access Journals (Sweden)
Yuriy Solodkovskyy
2012-02-01
Full Text Available This article thoroughly analyzes the current state of the global pharmaceutical market, defines the key factors for its development and outlines the promising areas of transformation of existing business models of top companies. The forecasted data relating to the market development until 2015 have been investigated. The global, market, technological and organizational factors of transformation of modern model of the global pharmaceutical market have been identified.
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples of such applications are risk mitigation, disaster management, post-disaster recovery planning, and catastrophe loss estimation and risk management. Due to the lack of proper knowledge of the factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides a practical means of better capturing spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples of such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology improves several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard. It makes the best use of recent advancements in computer technology, in both software and hardware, and is well structured for implementation using conventional GIS tools.
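The cell-based source model idea can be sketched as a raster of grid cells receiving ground motion from point sources through an attenuation function; the attenuation coefficients and source list below are invented for illustration, not a calibrated relation:

```python
import numpy as np

# A minimal cell-based hazard sketch: ground motion at each grid cell from a
# set of point sources, using an illustrative attenuation relation
# ln(PGA) = a + b*M - c*ln(R + 10). Coefficients a, b, c are assumed.
a, b, c = -3.5, 1.0, 1.3

cells_x, cells_y = np.meshgrid(np.arange(0, 100, 10.0), np.arange(0, 100, 10.0))
sources = [(20.0, 30.0, 6.5), (70.0, 80.0, 7.0)]   # (x, y, magnitude) [km, km, Mw]

pga = np.zeros_like(cells_x)
for sx, sy, mag in sources:
    r = np.hypot(cells_x - sx, cells_y - sy)       # epicentral distance per cell
    pga = np.maximum(pga, np.exp(a + b * mag - c * np.log(r + 10.0)))

print(f"max PGA cell value: {pga.max():.3f} g")
```

In a full implementation each cell would also carry its own activity rate, focal depth and attenuation choice, which is what lets the raster framework express spatially varying seismotectonic assumptions.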
DEFF Research Database (Denmark)
Hagemann, Kit; Scholderer, Joachim
and experts' understanding of benefits and risks associated with three novel foods (a potato, rice and functional food ingredients) using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allows...... researchers to pit a normative analysis (expert mental models) against a descriptive analysis (consumer mental models). Expert models were elicited by means of a three-wave Delphi procedure from altogether 24 international experts, and consumer models from in-depth interviews with Danish consumers. The results...... revealed that consumers' and experts' mental models differed in scope. Experts focused on the types of hazards for which risk assessments can be conducted under current legal frameworks, whereas consumers were concerned about issues that lay outside the scope of current legislation. Experts......
ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis
Czech Academy of Sciences Publication Activity Database
Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.
2016-01-01
Vol. 3, 12 December (2016), Article No. 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography
Modelling and analysis of the transformer current resonance in dual active bridge converters
DEFF Research Database (Denmark)
Qin, Zian; Shen, Zhan; Blaabjerg, Frede
2017-01-01
Due to the parasitic capacitances of the transformer and inductor in Dual Active Bridge (DAB) converters, resonance occurs in the transformer currents. This high-frequency resonant current flowing into the full bridges will worsen their soft-switching performance and thereby reduce the converter's efficiency....... In order to study the generation mechanism of this current resonance, the impedance of the transformer and inductor with parasitic components is modelled in this digest. Then, based on the impedance model, an approach is proposed to mitigate the current resonance. Finally, both the impedance model
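The resonance described arises from the series inductance ringing against the transformer's parasitic capacitance; a back-of-envelope estimate of the resonant frequency, with assumed component values (not from any particular DAB design), is:

```python
import math

# Resonant frequency of the transformer current ringing, modelled as the series
# (leakage) inductance against the equivalent parasitic winding capacitance.
# Both component values are assumed for illustration.
L_s = 60e-6      # series/leakage inductance [H]
C_p = 150e-12    # equivalent parasitic capacitance [F]

f_res = 1.0 / (2 * math.pi * math.sqrt(L_s * C_p))
print(f"resonance near {f_res / 1e6:.2f} MHz")
```

Because the parasitic capacitance is tiny, the ringing sits far above the switching frequency, which is why it shows up superimposed on the nominal trapezoidal transformer current.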
Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.
2010-05-01
the outburst of landslide-dammed lakes) remains a challenge:
• The knowledge about the onset of the process is often limited (bathymetry of the lakes, subsurface water, properties of the dam (content of ice), type of dam breach, understanding of process chains and interactions).
• The size of glacial lakes may change rapidly but continuously, and many lakes break out within a short time after their development. Continuous monitoring is therefore required to keep updated on the existing hazards.
• Also the outburst of small glacial lakes may lead to significant debris floods or even debris flows if there is plenty of erodible material available.
• The available modeling software packages are of limited suitability for lake outburst floods: e.g. software developed by the hydrological community is specialized to simulate (debris) floods with input hydrographs on moderately steep flow channels and with lower sediment loads. In contrast, programs for rapid mass movements are better suited to steeper slopes and a sudden onset of the movement. The typical characteristics of GLOFs lie in between and vary for different channel sections.
In summary, the major bottlenecks remain in deriving realistic or worst-case scenarios and predicting their magnitude and area of impact. This mainly concerns uncertainties in the dam break process, involved volumes, erosion rates, changing rheologies, and the limited capabilities of available software packages to simulate process interactions and transformations, such as the development of a hyperconcentrated flow into a debris flow. In addition, many areas prone to lake outburst floods are located in developing countries, where the threatened population has limited scope for decision-making and limited resources for mitigation.
NitroScape: A model to integrate nitrogen transfers and transformations in rural landscapes
Energy Technology Data Exchange (ETDEWEB)
Duretz, S. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), 78850 Thiverval-Grignon (France); Drouet, J.L., E-mail: Jean-Louis.Drouet@grignon.inra.fr [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), 78850 Thiverval-Grignon (France); Durand, P. [INRA-AgroCampus, UMR 1069 Sol Agro et hydrosysteme Spatialisation (SAS), 35042 Rennes cedex (France); Hutchings, N.J. [Department of Agroecology, Faculty of Agricultural Sciences, University of Aarhus (AU), Blichers Alle, 8830 Tjele (Denmark); Theobald, M.R. [Department of Chemistry and Agricultural Analysis, Technical University of Madrid (UPM), 28040 Madrid (Spain); Centre for Ecology and Hydrology (CEH), Bush Estate, Penicuik, Midlothian EH26 0QB (United Kingdom); Salmon-Monviola, J. [INRA-AgroCampus, UMR 1069 Sol Agro et hydrosysteme Spatialisation (SAS), 35042 Rennes cedex (France); Dragosits, U. [Centre for Ecology and Hydrology (CEH), Bush Estate, Penicuik, Midlothian EH26 0QB (United Kingdom); Maury, O. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), 78850 Thiverval-Grignon (France); Sutton, M.A. [Centre for Ecology and Hydrology (CEH), Bush Estate, Penicuik, Midlothian EH26 0QB (United Kingdom); Cellier, P. [INRA-AgroParisTech, UMR 1091 Environnement et Grandes Cultures (EGC), 78850 Thiverval-Grignon (France)
2011-11-15
Modelling nitrogen transfer and transformation at the landscape scale is relevant to estimate the mobility of the reactive forms of nitrogen (Nr) and the associated threats to the environment. Here we describe the development of a spatially and temporally explicit model to integrate Nr transfer and transformation at the landscape scale. The model couples four existing models, to simulate atmospheric, farm, agro-ecosystem and hydrological Nr fluxes and transformations within a landscape. Simulations were carried out on a theoretical landscape consisting of pig-crop farms interspersed with unmanaged ecosystems. Simulation results illustrated the effect of spatial interactions between landscape elements on Nr fluxes and losses to the environment. More than 10% of the total N₂O emissions were due to indirect emissions. The nitrogen budgets and transformations of the unmanaged ecosystems varied considerably, depending on their location within the landscape. The model represents a new tool for assessing the effect of changes in landscape structure on Nr fluxes. - Highlights: → The landscape scale is relevant to study how spatial interactions affect Nr fate. → The NitroScape model integrates Nr transfer and transformation at landscape scale. → NitroScape couples existing atmospheric, farm, agro-ecosystem and hydrological models. → Data exchanges within NitroScape are dynamic and spatially distributed. → More than 10% of the simulated N₂O emissions are due to indirect emissions. - A model integrating terrestrial, hydrological and atmospheric processes of Nr transfer and transformation at the landscape scale has been developed to simulate the effect of spatial interactions between landscape elements on Nr fate.
NitroScape: A model to integrate nitrogen transfers and transformations in rural landscapes
International Nuclear Information System (INIS)
Duretz, S.; Drouet, J.L.; Durand, P.; Hutchings, N.J.; Theobald, M.R.; Salmon-Monviola, J.; Dragosits, U.; Maury, O.; Sutton, M.A.; Cellier, P.
2011-01-01
Modelling nitrogen transfer and transformation at the landscape scale is relevant to estimate the mobility of the reactive forms of nitrogen (Nr) and the associated threats to the environment. Here we describe the development of a spatially and temporally explicit model to integrate Nr transfer and transformation at the landscape scale. The model couples four existing models to simulate atmospheric, farm, agro-ecosystem and hydrological Nr fluxes and transformations within a landscape. Simulations were carried out on a theoretical landscape consisting of pig-crop farms interspersed with unmanaged ecosystems. Simulation results illustrated the effect of spatial interactions between landscape elements on Nr fluxes and losses to the environment. More than 10% of the total N2O emissions were due to indirect emissions. The nitrogen budgets and transformations of the unmanaged ecosystems varied considerably, depending on their location within the landscape. The model represents a new tool for assessing the effect of changes in landscape structure on Nr fluxes. - Highlights: → The landscape scale is relevant to study how spatial interactions affect Nr fate. → The NitroScape model integrates Nr transfer and transformation at landscape scale. → NitroScape couples existing atmospheric, farm, agro-ecosystem and hydrological models. → Data exchanges within NitroScape are dynamic and spatially distributed. → More than 10% of the simulated N2O emissions are due to indirect emissions. - A model integrating terrestrial, hydrological and atmospheric processes of Nr transfer and transformation at the landscape scale has been developed to simulate the effect of spatial interactions between landscape elements on Nr fate.
Agent-based simulation for human-induced hazard analysis.
Bulleit, William M; Drewek, Matthew W
2011-02-01
Terrorism could be treated as a hazard for design purposes. For instance, the terrorist hazard could be analyzed in a manner similar to the way that seismic hazard is handled. No matter how terrorism is dealt with in the design of systems, predictions of the frequency and magnitude of the hazard will be required. And, if the human-induced hazard is to be designed for in a manner analogous to natural hazards, then the predictions should be probabilistic in nature. The model described in this article is a prototype that uses agent-based modeling (ABM) to analyze terrorist attacks. The basic approach of using ABM to model human-induced hazards has been preliminarily validated in the sense that the attack magnitudes appear to be power-law distributed and attacks occur mostly in regions through which high levels of wealth pass, such as transit routes and markets. The model developed in this study indicates that ABM is a viable approach to modeling socioeconomic-based infrastructure systems for engineering design to deal with human-induced hazards. © 2010 Society for Risk Analysis.
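The wealth-following attack pattern described above can be sketched in a few lines. This toy model, with its hypothetical grid, wealth surface, and corridor definition, illustrates the ABM idea only; it is not the authors' prototype.

```python
import random

random.seed(1)

# Toy grid "city": wealth is concentrated along a central transit corridor.
SIZE = 21

def wealth(x, y):
    # Hypothetical wealth surface: highest along the middle column
    # (a market/transit corridor); y plays no role in this sketch.
    return 10.0 / (1.0 + abs(x - SIZE // 2))

# Attacker agents pick targets with probability proportional to local wealth.
cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
weights = [wealth(x, y) for x, y in cells]
attacks = [random.choices(cells, weights=weights)[0] for _ in range(2000)]

# Fraction of attacks landing on the high-wealth corridor (central 3 columns).
corridor = sum(1 for x, y in attacks if abs(x - SIZE // 2) <= 1)
print(corridor / len(attacks))
```

Even this crude sketch reproduces the qualitative finding: attacks cluster where wealth passes through, here roughly 40% of them on 3 of 21 columns.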
Functional form diagnostics for Cox's proportional hazards model.
León, Larry F; Tsai, Chih-Ling
2004-03-01
We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.
Seismic hazard in the Intermountain West
Haller, Kathleen; Moschetti, Morgan P.; Mueller, Charles; Rezaeian, Sanaz; Petersen, Mark D.; Zeng, Yuehua
2015-01-01
The 2014 national seismic-hazard model for the conterminous United States incorporates new scientific results and important model adjustments. The current model includes updates to the historical catalog, which is spatially smoothed using both fixed-length and adaptive-length smoothing kernels. Fault-source characterization improved by adding faults, revising rates of activity, and incorporating new results from combined inversions of geologic and geodetic data. The update also includes a new suite of published ground motion models. Changes in probabilistic ground motion are generally less than 10% in most of the Intermountain West compared to the prior assessment, and ground-motion hazard in four Intermountain West cities illustrates the range and magnitude of change in the region. Seismic hazard at reference sites in Boise and Reno increased as much as 10%, whereas hazard in Salt Lake City decreased 5–6%. The largest change was in Las Vegas, where hazard increased 32–35%.
Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling
Directory of Open Access Journals (Sweden)
G. Delmonaco
2003-01-01
A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulated daily intensity of 280-330 mm. This value can be considered the hydrological triggering
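The infinite slope model named in the steps above reduces to a closed-form factor of safety. The sketch below uses a standard textbook formulation with illustrative soil parameters; the paper's exact parameterization may differ.

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of safety of an infinite slope (standard simplification).
    c_kpa: effective cohesion [kPa]; phi_deg: effective friction angle [deg];
    gamma: soil unit weight [kN/m3]; z: soil thickness [m];
    beta_deg: slope angle [deg]; m: saturated fraction of the soil column."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # Resisting stress: cohesion plus effective normal stress times tan(phi')
    resisting = c_kpa + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    # Driving shear stress along the slip surface
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative values: the same slope dry vs. fully saturated.
fs_dry = infinite_slope_fs(5, 30, 19, 2, 35, m=0.0)
fs_wet = infinite_slope_fs(5, 30, 19, 2, 35, m=1.0)
print(fs_dry, fs_wet)
```

Pore pressure from a rainfall event drives the factor of safety below 1, which is exactly the mechanism the model uses to map potential source areas for different return times.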
Sogutmaz Ozdemir, Bahar; Budak, Hikmet
2018-01-01
Brachypodium distachyon has recently emerged as a model plant species for the grass family (Poaceae), which includes the major cereal crops and forage grasses. One of the important traits of a model species is its capacity to be transformed and the ease of growing it both in tissue culture and under greenhouse conditions. Hence, plant transformation technology is crucial for improvements in agricultural studies, both for the study of new genes and for the production of new transgenic plant species. In this chapter, we review an efficient tissue culture system and two different transformation systems for Brachypodium using the most commonly preferred gene transfer techniques in plant species: the microprojectile bombardment method (biolistics) and Agrobacterium-mediated transformation. In plant transformation studies, the most frequently used explant materials are immature embryos, due to their higher transformation efficiencies and regeneration capacity. However, mature embryos are available throughout the year, in contrast to immature embryos. We explain a tissue culture protocol for Brachypodium using mature embryos of selected inbred lines from our collection. Embryogenic calluses obtained from mature embryos are used to transform Brachypodium with both transformation techniques, revised from protocols previously applied in the grasses, for example by applying vacuum infiltration, different wounding treatments, modification of the inoculation and cocultivation steps, or optimization of bombardment parameters.
International Nuclear Information System (INIS)
Perdahcıoğlu, E.S.; Geijselaers, H.J.M.
2012-01-01
Mechanically induced martensitic transformation and the associated transformation plasticity phenomena in austenitic stainless steels are studied. The mechanisms responsible for the transformation are investigated and put into perspective based on experimental evidence. The stress and strain partitioning into the austenite and martensite phases is formulated using a mean-field homogenization approach. At this intermediate length scale the average stress in the austenite phase is computed and utilized to compute the mechanical driving force resolved in the material. The amount of transformation and the transformation plasticity are derived as functions of the driving force. The mechanical response of the material is obtained by combining the homogenization and transformation models. The model is verified by mechanical tests under biaxial loading conditions during which different transformation rates are observed. As a final verification of the model, a bending test is used which manifests the stress-state dependency of the transformation.
Asset transformation and the challenges to servitize a utility business model
International Nuclear Information System (INIS)
Helms, Thorsten
2016-01-01
The traditional energy utility business model is under pressure, and energy services are expected to play an important role for the energy transition. Experts and scholars argue that utilities need to innovate their business models, and transform from commodity suppliers to service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets, towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists about such transitions for capital-intensive commodity providers, and particularly energy utilities, where the challenges to servitize are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of them is ‘asset transformation’, the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver for the complexity of business model transitions. Managers need to carefully manage those challenges, and find ways to operate new service and established utility business models side by side. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services, and by supporting the exchange of knowledge in the industry. - Highlights: •The paper analyses the expected transformation of utilities into service-providers. •Service and utility business models possess very different attributes. •The former is based on intangible, the latter on tangible assets. •The transformation into a service-provider is related with great challenges. •Asset transformation is proposed as a barrier for business model innovation.
International Nuclear Information System (INIS)
Boissonnade, A; Hossain, Q; Kimball, J
2000-01-01
Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526, are different than those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.
Scott, Kevin M.; Macias, Jose Luis; Naranjo, Jose Antonio; Rodriguez, Sergio; McGeehin, John P.
2001-01-01
Communities in lowlands near volcanoes are vulnerable to significant volcanic flow hazards in addition to those associated directly with eruptions. The largest such risk is from debris flows beginning as volcanic landslides, with the potential to travel over 100 kilometers. Stratovolcanic edifices commonly are hydrothermal aquifers composed of unstable, altered rock forming steep slopes at high altitudes, and the terrain surrounding them is commonly mantled by readily mobilized, weathered airfall and ashflow deposits. We propose that volcano hazard assessments integrate the potential for unanticipated debris flows with, at active volcanoes, the greater but more predictable potential of magmatically triggered flows. This proposal reinforces the already powerful arguments for minimizing populations in potential flow pathways below both active and selected inactive volcanoes. It also addresses the potential for volcano flank collapse to occur with instability early in a magmatic episode, as well as the 'false-alarm problem'-the difficulty in evacuating the potential paths of these large mobile flows. Debris flows that transform from volcanic landslides, characterized by cohesive (muddy) deposits, create risk comparable to that of their syneruptive counterparts of snow and ice-melt origin, which yield noncohesive (granular) deposits, because: (1) Volcano collapses and the failures of airfall- and ashflow-mantled slopes commonly yield highly mobile debris flows as well as debris avalanches with limited runout potential. Runout potential of debris flows may increase several fold as their volumes enlarge beyond volcanoes through bulking (entrainment) of sediment. Through this mechanism, the runouts of even relatively small collapses at Cascade Range volcanoes, in the range of 0.1 to 0.2 cubic kilometers, can extend to populated lowlands. (2) Collapse is caused by a variety of triggers: tectonic and volcanic earthquakes, gravitational failure, hydrovolcanism, and
Transforming Cobol Legacy Software to a Generic Imperative Model
National Research Council Canada - National Science Library
Moraes, DinaL
1999-01-01
.... This research develops a transformation system to convert COBOL code into a generic imperative model, recapturing the initial design and deciphering the requirements implemented by the legacy code...
Probabilistic seismic hazard assessment. Gentilly 2
International Nuclear Information System (INIS)
1996-03-01
Results of this probabilistic seismic hazard assessment were determined using a suite of conservative assumptions. The intent of this study was to perform a limited hazard assessment that incorporated a range of technically defensible input parameters. To best achieve this goal, input selected for the hazard assessment tended to be conservative with respect to the selection of attenuation models and seismicity parameters. Seismic hazard estimates at Gentilly 2 were most affected by selection of the attenuation model. Alternative definitions of seismic source zones had a relatively small impact on seismic hazard. A St. Lawrence Rift model including a maximum magnitude of 7.2 mb in the zone containing the site had little effect on the hazard estimate relative to other seismic source zonation models. Mean annual probabilities of exceeding the design peak ground acceleration, and the design response spectrum for the Gentilly 2 site were computed to lie in the range of 0.001 to 0.0001. This hazard result falls well within the range determined to be acceptable for nuclear reactor sites located throughout the eastern United States. (author) 34 refs., 6 tabs., 28 figs
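Mean annual exceedance probabilities of the kind quoted above come from combining source-zone event rates with a ground-motion model. The toy computation below illustrates the mechanics; the source zones, rates, and ground-motion parameters are invented for illustration and are not taken from the Gentilly 2 study.

```python
import math

def p_exceed_lognormal(a, median, sigma_ln):
    # P(ground motion > a | one event), lognormal ground-motion model
    z = (math.log(a) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2))

# Two hypothetical source zones: (annual event rate, median PGA in g, sigma_ln)
sources = [(0.01, 0.10, 0.6), (0.002, 0.25, 0.6)]

def annual_exceedance_rate(a):
    # Total rate of events producing ground motion above level a
    return sum(nu * p_exceed_lognormal(a, med, s) for nu, med, s in sources)

for a in (0.05, 0.1, 0.2, 0.4):
    lam = annual_exceedance_rate(a)
    p_annual = 1.0 - math.exp(-lam)   # Poisson assumption for event occurrence
    print(f"PGA > {a:.2f} g: annual probability = {p_annual:.2e}")
```

The resulting hazard curve decreases monotonically with ground-motion level, and with these made-up inputs the annual probabilities land in the same broad 1e-2 to 1e-4 range discussed in the abstract.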
Culture models of human mammary epithelial cell transformation
Energy Technology Data Exchange (ETDEWEB)
Stampfer, Martha R.; Yaswen, Paul
2000-11-10
Human pre-malignant breast diseases, particularly ductal carcinoma in situ (DCIS), already display several of the aberrant phenotypes found in primary breast cancers, including chromosomal abnormalities, telomerase activity, inactivation of the p53 gene and overexpression of some oncogenes. Efforts to model early breast carcinogenesis in human cell cultures have largely involved studies of in vitro transformation of normal finite-lifespan human mammary epithelial cells (HMEC) to immortality and malignancy. We present a model of HMEC immortal transformation consistent with the known in vivo data. This model includes a recently described, presumably epigenetic process, termed conversion, which occurs in cells that have overcome stringent replicative senescence and are thus able to maintain proliferation with critically short telomeres. The conversion process involves reactivation of telomerase activity, and acquisition of good uniform growth in the absence and presence of TGF-β. We propose that overcoming the proliferative constraints set by senescence, and undergoing conversion, represent key rate-limiting steps in human breast carcinogenesis, and occur during early-stage breast cancer progression.
A Professionalism Curricular Model to Promote Transformative Learning Among Residents.
Foshee, Cecile M; Mehdi, Ali; Bierer, S Beth; Traboulsi, Elias I; Isaacson, J Harry; Spencer, Abby; Calabrese, Cassandra; Burkey, Brian B
2017-06-01
Using the frameworks of transformational learning and situated learning theory, we developed a technology-enhanced professionalism curricular model to build a learning community aimed at promoting residents' self-reflection and self-awareness. The RAPR model had 4 components: (1) Recognize: elicit awareness; (2) Appreciate: question assumptions and take multiple perspectives; (3) Practice: try new/changed perspectives; and (4) Reflect: articulate implications of transformed views on future actions. The authors explored the acceptability and practicality of the RAPR model in teaching professionalism in a residency setting, including how residents and faculty perceive the model, how well residents carry out the curricular activities, and whether these activities support transformational learning. A convenience sample of 52 postgraduate years 1 through 3 internal medicine residents participated in the 10-hour curriculum over 4 weeks. A constructivist approach guided the thematic analysis of residents' written reflections, which were a required curricular task. A total of 94% (49 of 52) of residents participated in 2 implementation periods (January and March 2015). Findings suggested that RAPR has the potential to foster professionalism transformation in 3 domains: (1) attitudinal, with participants reporting they viewed professionalism in a more positive light and felt more empathetic toward patients; (2) behavioral, with residents indicating their ability to listen to patients increased; and (3) cognitive, with residents indicating the discussions improved their ability to reflect, and this helped them create meaning from experiences. Our findings suggest that RAPR offers an acceptable and practical strategy to teach professionalism to residents.
Sato Processes in Default Modeling
DEFF Research Database (Denmark)
Kokholm, Thomas; Nicolato, Elisa
In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards specifying the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore we also analyze specifications obtained via a simple deterministic time-change of a homogeneous Levy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on the single names included in the iTraxx Europe index. The performances are compared...
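The identity "survival probability = Laplace transform of the cumulative hazard" can be checked numerically. The sketch below uses a deliberately simple specification (a Gamma-distributed constant intensity, not a Sato process) so the Monte Carlo estimate can be compared with a closed-form transform.

```python
import math
import random

random.seed(0)

# Survival probability S(t) = E[exp(-Lambda(t))], the Laplace transform of the
# cumulative hazard. Toy specification: intensity lam drawn once from
# Gamma(k, theta), so Lambda(t) = lam * t and the expectation is the Gamma
# Laplace transform evaluated at t.
k, theta, t = 2.0, 0.5, 3.0

n = 200_000
mc = sum(math.exp(-random.gammavariate(k, theta) * t) for _ in range(n)) / n

closed_form = (1.0 + theta * t) ** (-k)   # Gamma Laplace transform at s = t
print(mc, closed_form)
```

With these parameters the closed form is (1 + 1.5)^-2 = 0.16, and the Monte Carlo average agrees to within sampling error, illustrating why specifying the cumulative hazard directly still pins down survival probabilities.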
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
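A minimal frequency-domain identification in the same spirit, using a plain FFT and linear least squares as a stand-in for the paper's orthogonal Fourier modeling functions, might look as follows; the simulated system, noise input, and band limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True first-order system G(s) = b / (s + a); simulate its response to a random input.
a_true, b_true = 2.0, 5.0
dt, n = 0.01, 4096
u = rng.standard_normal(n)

y = np.zeros(n)
for i in range(n - 1):                       # forward-Euler simulation
    y[i + 1] = y[i] + dt * (-a_true * y[i] + b_true * u[i])

# Frequency-domain data via the FFT of the time series.
U = np.fft.rfft(u)
Y = np.fft.rfft(y)
w = 2 * np.pi * np.fft.rfftfreq(n, dt)
keep = (w > 0.1) & (w < 20.0)                # band with good excitation

# G = b/(jw + a)  =>  jw*Y = b*U - a*Y : linear least squares in (b, a)
A = np.column_stack([U[keep], -Y[keep]])
rhs = 1j * w[keep] * Y[keep]
(b_est, a_est), *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(abs(b_est), abs(a_est))
```

The estimated pole and gain come out near their true values despite the noise-like input, which is the essence of determining model structure and parameters from Fourier-transformed time series data.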
He, P L; Zhao, C X; Dong, Q Y; Hao, S B; Xu, P; Zhang, J; Li, J G
2018-01-20
Objective: To evaluate the occupational health risk of decorative coating manufacturing enterprises and to explore the applicability of the occupational hazard risk index model in health risk assessment, so as to provide a basis for the health management of enterprises. Methods: A decorative coating manufacturing enterprise in Hebei Province was chosen as the research object. Based on the types of occupational hazards and the patterns of contact with them, the occupational hazard risk index model was used to evaluate the health risk of occupational hazards in the key positions of the enterprise, checked against workplace test results and occupational health examinations. Results: The positions of oily painters, water-borne painters, filling workers and packers exposed to noise were classified as moderate harm, and the positions of color workers exposed to chromic acid salts and of oily painters exposed to butyl acetate as mild harm. Other positions were harmless. The abnormal rate for noise exposure in the physical examination results was 6.25%, and no abnormality was found for the other risk factors. Conclusion: The occupational hazard risk index model can be used in the occupational health risk assessment of decorative coating manufacturing enterprises; noise was the key hazard among the occupational hazards in this enterprise.
International Nuclear Information System (INIS)
Rebour, V.; Georgescu, G.; Leteinturier, D.; Raimond, E.; La Rovere, S.; Bernadara, P.; Vasseur, D.; Brinkman, H.; Groudev, P.; Ivanov, I.; Turschmann, M.; Sperbeck, S.; Potempski, S.; Hirata, K.; Kumar, Manorma
2016-01-01
This report provides a review of existing practices to model and implement external flooding hazards in existing level 1 PSA. The objective is to identify good practices on the modelling of initiating events (internal and external hazards) with a perspective of development of extended PSA and implementation of external events modelling in extended L1 PSA, its limitations/difficulties as far as possible. The views presented in this report are based on the ASAMPSA-E partners' experience and available publications. The report includes discussions on the following issues: - how to structure a L1 PSA for external flooding events, - information needed from geosciences in terms of hazards modelling and to build relevant modelling for PSA, - how to define and model the impact of each flooding event on SSCs with distinction between the flooding protective structures and devices and the effect of protection failures on other SSCs, - how to identify and model the common cause failures in one reactor or between several reactors, - how to apply HRA methodology for external flooding events, - how to credit additional emergency response (post-Fukushima measures like mobile equipment), - how to address the specific issues of L2 PSA, - how to perform and present risk quantification. (authors)
Managing Green Business Model Transformations
Sommer, Axel
2012-01-01
Environmental sustainability creates both tremendous business opportunities and formidable threats to established companies across virtually all industry sectors. Yet many companies tackle the issue in a superficial or passive way through increased environmental reporting, the use of “greenspeak” in their corporate communication activities or isolated efforts to create green products or reduce pollution. In contrast, there are a small but increasing number of firms that employ a holistic approach to sustainability and consider fundamental changes to their existing business models. By ignoring the opportunities of Green Business Model Transformations, companies exclude themselves from a large variety of potential means to create economic value. In addition to ordinary product and process innovations, they can change “the rules of the game” within an industry towards environmental sustainability. This can facilitate the commercialisation of new green products that would not be competitive otherwise targ...
Towards an automatic model transformation mechanism from UML state machines to DEVS models
Directory of Open Access Journals (Sweden)
Ariel González
2015-08-01
The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) either to build or to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.
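The core of such a state-machine-to-DEVS mapping can be suggested with a toy transformation. The dictionary representation and the simplified banking-machine states below are hypothetical; a real QVT Relations transformation operates on metamodels, not dictionaries.

```python
# Minimal sketch (not QVT): each UML state becomes a DEVS phase, and each
# triggered transition becomes an entry in the external transition function.
uml_sm = {
    "initial": "idle",
    "transitions": [
        ("idle", "insert_card", "authenticating"),
        ("authenticating", "pin_ok", "menu"),
        ("authenticating", "pin_bad", "idle"),
    ],
}

def to_devs(sm):
    # DEVS atomic-model skeleton: state set S, initial state s0, and
    # delta_ext as a (phase, event) -> phase lookup table.
    states = {s for src, _, dst in [(t[0], t[1], t[2]) for t in sm["transitions"]]
              for s in (src, dst)} if False else \
             {s for t in sm["transitions"] for s in (t[0], t[2])}
    delta_ext = {(src, ev): dst for src, ev, dst in sm["transitions"]}
    return {"S": states, "delta_ext": delta_ext, "s0": sm["initial"]}

devs = to_devs(uml_sm)
print(devs["delta_ext"][("idle", "insert_card")])
```

Even this sketch shows why validation matters: every UML transition must reappear, exactly once, in the generated transition function.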
Computer-aided transformation of PDE models: languages, representations, and a calculus of operations
2016-01-05
A domain-specific embedded language called ibvp was developed to model initial...
Independent screening for single-index hazard rate models with ultrahigh dimensional features
DEFF Research Database (Denmark)
Gorst-Rasmussen, Anders; Scheike, Thomas
2013-01-01
can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...
Backlund transformations as canonical transformations
International Nuclear Information System (INIS)
Villani, A.; Zimerman, A.H.
1977-01-01
Toda and Wadati, as well as Kodama and Wadati, have shown that the Backlund transformations for the exponential lattice equation, sine-Gordon equation, K-dV (Korteweg-de Vries) equation and modified K-dV equation are canonical transformations. It is shown that the Backlund transformations for the Boussinesq equation, for a generalized K-dV equation, for a model equation for shallow water waves and for the nonlinear Schroedinger equation are also canonical transformations.
Modelling a single phase voltage controlled rectifier using Laplace transforms
Kraft, L. Alan; Kankam, M. David
1992-01-01
The development of a 20 kHz AC power system by NASA for large space projects has spurred a need to develop models for the equipment which will be used on these single phase systems. To date, models for the AC source (i.e., inverters) have been developed. It is the intent of this paper to develop a method to model the single phase voltage controlled rectifiers which will be attached to the AC power grid as an interface for connected loads. A modified version of EPRI's HARMFLO program is used as the shell for these models. The results obtained from the model developed in this paper are quite adequate for the analysis of problems such as voltage resonance. The unique technique presented in this paper uses Laplace transforms, rather than a curve fitting technique, to determine the harmonic content of the load current of the rectifier. The Laplace transforms yield the coefficients of the differential equations which model the line current to the rectifier directly.
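The harmonic content such a rectifier model must reproduce can be previewed for the ideal case: a full-wave rectified sine has a known Fourier series (DC term 2/π, even harmonics only), which a quick numerical check confirms. This sketch is purely illustrative and unrelated to the HARMFLO implementation.

```python
import numpy as np

# Harmonic content of an ideal full-wave rectified sine current, |sin(w t)|.
n = 1000
t = np.arange(n) / n                 # one fundamental period, w = 2*pi
i_load = np.abs(np.sin(2 * np.pi * t))

c = np.fft.rfft(i_load) / n          # normalized Fourier coefficients
dc = c[0].real                       # DC component, analytically 2/pi
h2 = 2 * abs(c[2])                   # 2nd-harmonic amplitude, analytically 4/(3*pi)

print(dc, h2)
```

Because rectification doubles the effective frequency, all odd harmonics of the fundamental vanish, and the dominant ripple sits at twice the line frequency.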
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
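Monte Carlo power estimation of the kind described can be sketched as follows. For self-containment this example uses a simple exponential-rates z-test rather than the paper's model-based or martingale-residual tests, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo power: probability of rejecting H0 (equal hazards) when the
# true hazard ratio is 1.5, using a z-test on log event rates.
def one_trial(n_per_group=100, hr=1.5):
    t0 = rng.exponential(1.0, n_per_group)       # control group, rate 1
    t1 = rng.exponential(1.0 / hr, n_per_group)  # treated group, rate hr
    # MLE of the log rate is log(n / sum(t)); its variance is 1/n, so the
    # standard error of the log hazard-ratio estimate is sqrt(2/n).
    log_hr_hat = np.log(n_per_group / t1.sum()) - np.log(n_per_group / t0.sum())
    se = np.sqrt(2.0 / n_per_group)
    return abs(log_hr_hat / se) > 1.96           # two-sided test, alpha = 0.05

power = np.mean([one_trial() for _ in range(2000)])
print(power)
```

The same loop structure applies to proportional-hazards assumption tests: simulate data under a known violation, apply the test, and report the rejection fraction as the power estimate.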
Seismic hazard in the eastern United States
Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison
2015-01-01
The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.
An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard
Directory of Open Access Journals (Sweden)
Xiulan Wang
2014-01-01
it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. The model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.
Directory of Open Access Journals (Sweden)
S. I. Bartsev
2015-06-01
A possible method for experimental determination of the parameters of the previously proposed continual mathematical model of soil organic matter transformation is considered theoretically in this paper. The continual model previously proposed by the authors, which uses the rate of matter transformation as a continual scale of its recalcitrance, describes the transformation process phenomenologically without going into the detail of its microbiological mechanisms; this is what keeps the model simple. The model is expressed as a single differential equation in first-order partial derivatives, which has an analytical solution in elementary functions. The model equation contains a small number of empirical parameters which characterize the environmental conditions under which the transformation process occurs and the initial properties of the plant litter. Given the values of these parameters, it is possible to calculate the dynamics of soil organic matter stocks and their distribution over transformation rate. In the present study, possible approaches to determining the model parameters are considered and a simple method for measuring them experimentally is proposed: incubating chemically homogeneous samples in soil and repeatedly measuring the sample mass loss over time. An equation for the time dynamics of the mass loss of an incubated homogeneous sample is derived from the basic assumption of the presented soil organic matter transformation model. Fitting the parameters of the sample mass-loss curve, calculated according to the proposed mass-loss dynamics equation, by the least-squares method then allows the parameters of the general equation of the soil organic matter transformation model to be determined.
About the use of rank transformation in sensitivity analysis of model output
International Nuclear Information System (INIS)
Saltelli, Andrea; Sobol', Ilya M
1995-01-01
Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output relationships, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be taken when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects') at the expense of the 'interactions' and 'higher-order interactions'. As a result, the influence of those parameters which affect the output mostly by way of interactions may be overlooked in an analysis based on ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y, which may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis.
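The basic effect the note studies — ranks rescuing a linear measure on a monotonic but nonlinear model — can be seen in a few lines. The toy model y = exp(5·x1) + x2 below is an assumption for illustration, not one of the paper's examples: the raw correlation between x1 and y is depressed by the nonlinearity, while the same statistic computed on ranks (i.e., a Spearman-type coefficient) recovers the strong monotonic dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 5000))
y = np.exp(5 * x1) + x2          # monotonic in x1, strongly nonlinear

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def rank(a):
    # Map each value to its rank (0..n-1); Pearson on ranks = Spearman.
    return np.argsort(np.argsort(a))

raw_cc = pearson(x1, y)                # depressed by the nonlinearity
rank_cc = pearson(rank(x1), rank(y))   # close to 1: monotonic relation
```

This is the benign case; the note's warning is about the opposite situation, where a parameter acts mainly through interactions and the rank transformation hides rather than reveals its influence.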
Protein structure analysis using the resonant recognition model and wavelet transforms
International Nuclear Information System (INIS)
Fang, Q.; Cosic, I.
1998-01-01
An approach based on the resonant recognition model and the discrete wavelet transform is introduced here for characterising proteins' biological function. The protein sequence is converted into a numerical series by assigning the electron-ion interaction potential to each amino acid from the N-terminal to the C-terminal. A set of peaks is found after performing a wavelet transform on a numerical series representing a group of homologous proteins. These peaks are related to protein structural and functional properties and are named the characteristic vector of that protein group. Furthermore, the amino acids contributing most to a protein's biological functions, the so-called 'hot spot' amino acids, are predicted by the continuous wavelet transform. It is found that the hot spots are clustered around the protein's cleft structure. The wavelet approach provides a novel method for amino acid sequence analysis as well as an extension of the newly established macromolecular interaction model: the resonant recognition model. Copyright (1998) Australasian Physical and Engineering Sciences in Medicine
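The first step of the approach — turning a sequence into a numerical series and looking for spectral peaks — can be sketched as follows. The five electron-ion interaction potential values below are illustrative placeholders (not the published EIIP table), the toy "protein" is artificial, and a plain discrete Fourier spectrum stands in for the wavelet analysis of the paper.

```python
import numpy as np

# Illustrative per-residue values; the real resonant recognition model
# uses the published EIIP table for all 20 amino acids.
EIIP = {'A': 0.0373, 'G': 0.0050, 'L': 0.0000, 'K': 0.0371, 'S': 0.0829}

def to_series(seq):
    """Map an amino-acid sequence to a numerical series, N- to C-terminal."""
    return np.array([EIIP[aa] for aa in seq])

def spectrum(series):
    s = series - series.mean()     # remove the DC component
    return np.abs(np.fft.rfft(s))  # peaks mark shared periodicities

sig = to_series("AGLKS" * 8)       # toy sequence with an exact period of 5
mag = spectrum(sig)                # peak expected at frequency index 40/5 = 8
```

In the paper the analogous peaks, found across a family of homologous proteins, form the characteristic vector of the group; the wavelet transform additionally localises them along the sequence, which the plain Fourier transform cannot do.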
An estimating equation for parametric shared frailty models with marginal additive hazards
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Martinussen, Torben
2004-01-01
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable...
High frequency modeling of power transformers. Stresses and diagnostics
Energy Technology Data Exchange (ETDEWEB)
Bjerkan, Eilert
2005-05-15
In this thesis a reliable, versatile and rigorous method for high-frequency power transformer modeling is sought and established. The purpose is to apply this model to sensitivity analysis of FRA (Frequency Response Analysis), a relatively new diagnostic method for assessing the mechanical integrity of power transformer windings on-site. The method should be versatile in terms of being able to estimate internal and external overvoltages and resonances. Another important aspect is that the method chosen is suitable for real transformer geometries. In order to verify the suitability of the model for real transformers, a specific test object is used: a 20 MVA transformer, with details given in chapter 1.4. The high-frequency power transformer model is established from geometrical and constructional information from the manufacturer, together with available material characteristics. All circuit parameters in the lumped-circuit representation are calculated from these data; no empirical modifications need to be performed. Comparison shows the capability of reasonable accuracy in the range from 10 kHz to 1 MHz utilizing a disc-to-disc representation. A compromise between the accuracy of the model due to discretization and the complexity of the model in a turn-to-turn representation is inevitable. The importance of the iron core is emphasized through a comparison of representations with and without the core included. Frequency-dependent phenomena are accurately represented using an isotropic equivalent for windings and core, even with a coarse mesh for the FEM model. This is achieved through a frequency-dependent complex permeability representation of the materials, deduced from an analytical solution of the frequency-dependent magnetic field inside the conductors and the core. The importance of dielectric losses in a transformer model is also assessed. Since published data on the high-frequency properties of pressboard are limited, some initial
Design and Modeling of an Integrated Micro-Transformer in a Flyback Converter
Directory of Open Access Journals (Sweden)
M. Derkaoui
2013-11-01
This paper presents the design and modeling of a square micro-transformer for its integration in a flyback converter. From the specifications of the switching power supply, we determined the geometric parameters of this micro-transformer. The π-electrical model of this micro-transformer highlights all parasitic effects generated by the stacking of different material layers and makes it possible to calculate the technological parameters using the S-parameters. Good dimensioning of the geometrical parameters efficiently reduces the energy losses in the micro-transformer and allows the desired converter output voltage to be reached. We have also simulated the electromagnetic effects with the help of the software FEMLAB 3.1 in two cases, without and with a ferromagnetic core, in order to choose the micro-transformer that has better electromagnetic compatibility with neighbouring components. To validate the dimensioning of the geometrical and technological parameters, we have simulated, with the help of the software PSIM 6.0, the equivalent electrical circuit of the converter containing the electrical circuit of the dimensioned planar micro-transformer.
International Nuclear Information System (INIS)
Stupkiewicz, S.; Petryk, H.
2006-01-01
A micromechanical model of stress-induced martensitic transformation in single crystals of shape memory alloys is developed. This model is a finite-strain counterpart to the approach presented recently in the small-strain setting [S. Stupkiewicz, H. Petryk, J. Mech. Phys. Solids 50 (2002) 2303-2331]. The stress-induced transformation is assumed to proceed by the formation and growth of parallel martensite plates within the austenite matrix. Propagation of phase transformation fronts is governed by a rate-independent thermodynamic criterion with a threshold value for the thermodynamic driving force, including in this way the intrinsic dissipation due to phase transition. This criterion selects the initial microstructure at the onset of transformation and governs the evolution of the laminated microstructure at the macroscopic level. A multiplicative decomposition of the deformation gradient into elastic and transformation parts is assumed, with full account for the elastic anisotropy of the phases. The pseudoelastic behavior of Cu-Zn-Al single crystal in tension and compression is studied as an application of the model
Linear non-threshold (LNT) radiation hazards model and its evaluation
International Nuclear Information System (INIS)
Min Rui
2011-01-01
In order to introduce the linear non-threshold (LNT) model used in studies of the dose effect of radiation hazards and to evaluate its application, a comprehensive analysis of the literature was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can account well for the cell survival curve across the high, medium and low absorbed dose ranges. There are still many uncertainties in assessment models of the effective dose of internal radiation based on the LNT assumptions and individual mean organ equivalents, and it is necessary to establish gender-specific voxel human models that take gender differences into account. In summary, the advantages and disadvantages of the various models coexist. Until a new theory and model are established, the LNT model remains the most scientifically defensible choice. (author)
Constitutive modeling of multiphase materials including phase transformations
Perdahcioglu, Emin Semih; Geijselaers, Hubertus J.M.; Khan, A.S.; Meredith, C; Farrokh, B
2011-01-01
A constitutive model is developed for materials involving two or more different phases in their microstructure such as DP (Dual Phase) or TRIP (TRansformation Induced Plasticity) steels. Homogenization of the response of the phases is achieved by the Mean-Field method. One of the phases in TRIP
Garion, C
2001-01-01
The 300-series stainless steels are metastable austenitic alloys: martensitic transformation occurs at low temperatures and/or when plastic strain fields develop in the structures. The transformation influences the mechanical properties of the material. The present note aims at proposing a set of constitutive equations describing the plastic strain induced martensitic transformation in the stainless steels at cryogenic temperatures. The constitutive modelling shall create a bridge between the material sciences and the structural analysis. For the structures developing and accumulating plastic deformations at sub-zero temperatures, it is of primary importance to be able to predict the intensity of martensitic transformation and its effect on the material properties. In particular, the constitutive model has been applied to predict the behaviour of the components of the LHC interconnections, the so-called bellows expansion joints (the LHC mechanical compensation system).
Fault diagnostics in power transformer model winding for different alpha values
Directory of Open Access Journals (Sweden)
G.H. Kusumadevi
2015-09-01
Transient overvoltages appearing at the line terminal of power transformer HV windings can cause failure of the winding insulation. The failure can be from winding to ground or between turns or sections of the winding. In most cases, failure from winding to ground can be detected by changes in the wave shape of the surge voltage appearing at the line terminal. However, detection of insulation failure between turns may be difficult due to the intricacies involved in identifying faults. In this paper, simulation investigations carried out on a power transformer model winding for identifying faults between turns of the winding are reported. The power transformer HV winding has been represented by 8, 16 and 24 sections, and the neutral current waveform has been analyzed for the same model winding represented by these different numbers of sections. The values of α (the square root of the ratio of the winding's total ground capacitance to its total series capacitance) considered are 5, 10 and 20. A standard lightning impulse voltage (1.2/50 μs) wave shape has been considered for the analysis. Computer simulations have been carried out using the software PSPICE version 10.0. Neutral current and frequency response analysis methods have been used for identification of faults within sections of the transformer model winding.
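The role of α can be made concrete with the classical initial impulse-voltage distribution for a winding with a grounded neutral, u(x) = V·sinh(α(1−x))/sinh(α), where x is the normalized distance from the line terminal. The capacitance values below are illustrative assumptions chosen to give α = 10, one of the values studied in the paper; the formula itself is standard transformer theory, not the paper's PSPICE model.

```python
import numpy as np

def alpha(c_ground, c_series):
    """alpha = sqrt(total ground capacitance / total series capacitance)."""
    return np.sqrt(c_ground / c_series)

def initial_distribution(V, a, x):
    """Initial impulse-voltage distribution along a grounded-neutral
    winding; x = 0 is the line terminal, x = 1 the neutral."""
    return V * np.sinh(a * (1 - x)) / np.sinh(a)

a = alpha(c_ground=400e-12, c_series=4e-12)       # illustrative -> alpha = 10
u = initial_distribution(1.0, a, np.array([0.0, 0.1, 1.0]))
```

Large α concentrates the initial voltage near the line terminal (for α = 10, about 63% of the impulse is dropped across the first tenth of the winding), which is why high-α windings stress the line-end turns most and why the paper examines fault signatures for α = 5, 10 and 20.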
Tornado hazard model with the variation effects of tornado intensity along the path length
International Nuclear Information System (INIS)
Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo
2015-01-01
Most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. Japanese electric power companies must assess tornado risks at their plants according to a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends a long narrow strip along the coastline, 5-10 km wide, as the region of interest, although the model tends to estimate inadequate wind speeds there due to its limits of application. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area and which can also consider the variation of tornado intensity along the path length and across the path width. (author)
Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)
Energy Technology Data Exchange (ETDEWEB)
Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)
2009-05-15
The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure had to be considered. (author)
Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)
International Nuclear Information System (INIS)
Musson, R. M. W.; Sellami, S.; Bruestle, W.
2009-01-01
The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure had to be considered. (author)
International Nuclear Information System (INIS)
Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey
2015-01-01
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.
Energy Technology Data Exchange (ETDEWEB)
Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)
2015-06-15
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.
International Nuclear Information System (INIS)
Song, Shaojie; Liu, Feng
2016-01-01
Considering a spherical misfitting precipitate growing into a finite elastic-perfectly plastic supersaturated matrix, a kinetic model of such a solid-state partitioning phase transformation is presented, in which the interactions of interface migration, solute diffusion and misfit accommodation are analyzed. The linkage between interface migration and solute diffusion proceeds through the interfacial composition and the interface velocity; their effects on misfit accommodation are mainly manifested in an effective transformation strain, which depends on the instantaneous composition field and the precipitate size. Taking the γ to α transformation of a binary Fe-0.5 at.% C alloy under both isothermal and continuous cooling conditions as examples, the effects of misfit accommodation on the coupled interface migration and solute diffusion are evaluated and discussed. For the isothermal transformation, a counterbalancing influence between the mechanical and chemical driving forces is found, so that the mixed-mode transformation kinetics is not sensitive to the elastic-plastic accommodation of the effective misfit strain. In contrast, during continuous cooling the effects of misfit accommodation on the kinetics of the solid-state partitioning phase transformation are mainly manifested in a marked decrease of the transformation starting temperature and the thermodynamic equilibrium composition. The present kinetic model was applied to predict the experimentally measured γ/α transformation of an Fe-0.47 at.% C alloy at a cooling rate of 10 K min⁻¹, and good agreement was achieved.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
International Nuclear Information System (INIS)
Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide
2005-09-01
To establish a reliability evaluation method for aged structural components, we developed the probabilistic seismic hazard evaluation code SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), which uses a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account; for example, it incorporates the group delay time of observed records and an update process model of active faults. This report is the user's guide to SHEAT-FM, covering the outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information and the execution method. (author)
Transforming Systems Engineering through Model Centric Engineering
2017-08-08
Contract No. HQ0034-13-D-0004 Report No. SERC-2017-TR-110 Date: August 8, 2017 Transforming Systems Engineering through Model-Centric... Engineering Technical Report SERC-2017-TR-110 Update: August 8, 2017 Principal Investigator: Mark Blackburn, Stevens Institute of Technology Co...Evangelista Sponsor: U.S. Army Armament Research, Development and Engineering Center (ARDEC), Office of the Deputy Assistant Secretary of Defense for
Retail business model transformation in multichannel environment
Chapagain, B. (Bimala)
2015-01-01
Abstract With the advent of the internet and e-commerce, the way of carrying out business and transactions has changed to a great extent. Consumers are continuously changing the way they shop, and this has forced retail businesses to transform their traditional brick-and-mortar operations by adopting multi-channel business models. Retailing is one of the most dynamic and competitive areas of business organization. Effective marketin...
Parameters of Models of Structural Transformations in Alloy Steel Under Welding Thermal Cycle
Kurkin, A. S.; Makarov, E. L.; Kurkin, A. B.; Rubtsov, D. E.; Rubtsov, M. E.
2017-05-01
A mathematical model of structural transformations in an alloy steel under the thermal cycle of multipass welding is suggested for computer implementation. The minimum necessary set of parameters for describing the transformations under heating and cooling is determined. Ferritic-pearlitic, bainitic and martensitic transformations under cooling of a steel are considered. A method for deriving the necessary temperature and time parameters of the model from the chemical composition of the steel is described. Published data are used to derive regression models of the temperature ranges and parameters of transformation kinetics in alloy steels. It is shown that the disadvantages of the active visual methods of analysis of the final phase composition of steels are responsible for inaccuracy and mismatch of published data. The hardness of a specimen, which correlates with some other mechanical properties of the material, is chosen as the most objective and reproducible criterion of the final phase composition. The models developed are checked by a comparative analysis of computational results and experimental data on the hardness of 140 alloy steels after cooling at various rates.
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
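The unscented-transform step the paper relies on can be shown in miniature. The scalar sketch below (illustrative tuning κ = 2; the valve model and EOL simulation of the paper are replaced by a stand-in nonlinear function) propagates a mean and variance through a nonlinearity using three sigma points instead of many Monte Carlo runs.

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Scalar unscented transform: approximate the mean and variance of
    f(x) for x ~ (mean, var) from 2n+1 deterministically chosen sigma
    points (n = 1 here), each carrying a fixed weight."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa / (n + kappa),
                  0.5 / (n + kappa),
                  0.5 / (n + kappa)])
    y = f(sigma)                         # one "simulation" per sigma point
    y_mean = np.dot(w, y)
    y_var = np.dot(w, (y - y_mean) ** 2)
    return y_mean, y_var

# Stand-in for the EOL simulation: a simple nonlinear map.
m, v = unscented_transform(1.0, 0.04, lambda x: x ** 2)
```

With κ chosen so that n + κ = 3, the sigma points match a Gaussian's fourth moment, and for this quadratic map the transformed mean (μ² + σ² = 1.04) and variance (4μ²σ² + 2σ⁴ = 0.1632) are recovered exactly from just three function evaluations — the efficiency argument the paper makes for EOL prediction.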
Building resilience to weather-related hazards through better preparedness
Keller, Julia; Golding, Brian; Johnston, David; Ruti, Paolo
2017-04-01
Recent developments in weather forecasting have transformed our ability to predict weather-related hazards, while mobile communication is radically changing the way that people receive information. At the same time, vulnerability to weather-related hazards is growing through urban expansion, population growth and climate change. This talk will address issues facing the science community in responding to the Sendai Framework objective to "substantially increase the availability of and access to multi-hazard early warning systems" in the context of weather-related hazards. It will also provide an overview of activities and approaches developed in the World Meteorological Organisation's High Impact Weather (HIWeather) project. HIWeather has identified and is promoting research in key multi-disciplinary gaps in our knowledge, including in basic meteorology, risk prediction, communication and decision making, that affect our ability to provide effective warnings. The results will be pulled together in demonstration projects that will both showcase leading edge capability and build developing country capacity.
Numerical model of phase transformation of steel C80U during hardening
Directory of Open Access Journals (Sweden)
T. Domański
2007-12-01
The article concerns numerical modelling of the phase transformations in the solid-state hardening of tool steel C80U. The following transformations were assumed: initial structure – austenite; austenite – pearlite, bainite; and austenite – martensite. The model for evaluating the phase fractions and their kinetics is based on the continuous heating diagram (CHT) and the continuous cooling diagram (CCT). Dilatometric tests on a thermal cycle simulator were performed, and their results were compared with the results of test numerical simulations. In this way the derived models for evaluating phase content and transformation kinetics in heating and cooling processes were verified. The results of the numerical simulations confirm the correctness of the algorithm that was worked out. In a numerical example, the phase fractions in a hardened axisymmetrical element were estimated by simulation.
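The CHT/CCT-based kinetics the article refers to are commonly expressed in Johnson-Mehl-Avrami-Kolmogorov (JMAK) form for the diffusional (pearlitic and bainitic) transformations. A minimal sketch of that standard form, with the rate constant calibrated from a single invented diagram reading (not the article's calibration), might look like:

```python
import math

def jmak_fraction(t, k, n):
    # Johnson-Mehl-Avrami-Kolmogorov isothermal kinetics: fraction
    # transformed after time t (k: rate constant, n: Avrami exponent).
    return 1.0 - math.exp(-k * t ** n)

def k_from_diagram_point(t_half, n):
    # Back out k from one (time, 50% transformed) point read off a
    # CCT/TTT diagram -- the kind of calibration such models rely on.
    return math.log(2.0) / t_half ** n

# Hypothetical reading: 50% transformed after 10 s, Avrami exponent 2.5.
k = k_from_diagram_point(t_half=10.0, n=2.5)
x_half = jmak_fraction(10.0, k, 2.5)
```

By construction the fitted curve passes through the calibration point (50% at 10 s) and rises monotonically toward full transformation.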
Eaton, A D; Zimmermann, C; Delaney, B; Hurley, B P
2017-08-01
An experimental platform employing human derived intestinal epithelial cell (IEC) line monolayers grown on permeable Transwell ® filters was previously investigated to differentiate between hazardous and innocuous proteins. This approach was effective at distinguishing these types of proteins and perturbation of monolayer integrity, particularly transepithelial electrical resistance (TEER), was the most sensitive indicator. In the current report, in vitro indicators of monolayer integrity, cytotoxicity, and inflammation were evaluated using primary (non-transformed) human polarized small intestinal epithelial barriers cultured on Transwell ® filters to compare effects of a hazardous protein (Clostridium difficile Toxin A [ToxA]) and an innocuous protein (bovine serum albumin [BSA]). ToxA exerted a reproducible decrease on barrier integrity at doses comparable to those producing effects observed from cell line-derived IEC monolayers, with TEER being the most sensitive indicator. In contrast, BSA, tested at concentrations substantially higher than ToxA, did not cause changes in any of the tested variables. These results demonstrate a similarity in response to certain proteins between cell line-derived polarized IEC models and a primary human polarized small intestinal epithelial barrier model, thereby reinforcing the potential usefulness of cell line-derived polarized IECs as a valid experimental platform to differentiate between hazardous and non-hazardous proteins.
Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events
Dinitz, Laura B.; Taketa, Richard A.
2013-01-01
This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
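The quantities the demonstration produces (losses avoided through mitigation, rate of return on mitigation investment) reduce to simple arithmetic once Hazus-style loss estimates are available. The toy sketch below uses invented dollar figures and a deliberately simplified ROI definition; the actual LUPM is probabilistic and carries uncertainty measures:

```python
def losses_avoided(loss_no_mitigation, loss_with_mitigation):
    # Difference between estimated losses without and with the policy.
    return loss_no_mitigation - loss_with_mitigation

def return_on_mitigation(loss_no_mitigation, loss_with_mitigation, cost):
    # Simple rate of return on a mitigation investment: losses avoided
    # per dollar spent, net of the dollar itself.
    avoided = losses_avoided(loss_no_mitigation, loss_with_mitigation)
    return (avoided - cost) / cost

# Hypothetical scenario: $120M expected loss unmitigated, $80M mitigated,
# at a mitigation cost of $25M.
roi = return_on_mitigation(120e6, 80e6, 25e6)
```

Here the invented policy avoids $40M of losses for $25M spent, a 60% net return; the LUPM additionally weights such outcomes by hazard-event probabilities.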
Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals
DEFF Research Database (Denmark)
Li, Jianing; Scheike, Thomas; Zhang, Mei Jie
2015-01-01
Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums of residuals.
Animation Strategies for Smooth Transformations Between Discrete LODs of 3D Building Models
Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias
2016-06-01
The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
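The first strategy, ordering collapse operations by edge length, can be sketched with a priority queue. This is an illustrative fragment, not the authors' implementation; a real system would also re-evaluate lengths after each collapse and enforce the restricted-triangle-mesh constraints:

```python
import heapq
import math

def collapse_order_by_length(vertices, edges):
    # Order edge-collapse operations by edge length, shortest first --
    # the simplest of the three transformation strategies discussed.
    def length(edge):
        a, b = vertices[edge[0]], vertices[edge[1]]
        return math.dist(a, b)

    heap = [(length(e), e) for e in edges]
    heapq.heapify(heap)
    order = []
    while heap:
        _, e = heapq.heappop(heap)
        order.append(e)
    return order

# Toy mesh fragment: vertex id -> (x, y, z), plus three candidate edges.
verts = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 2, 0), 3: (0, 0, 5)}
order = collapse_order_by_length(verts, [(0, 1), (1, 2), (0, 3)])
```

The moving-plane strategies from the article would replace the length key with a distance-to-plane key, so that collapses sweep along a general transformation direction.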
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with, and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation, and biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language, or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool.
An efficient visual saliency detection model based on Ripplet transform
Indian Academy of Sciences (India)
A Diana Andrushia
Keywords: Ripplet transform; visual saliency model; Receiver Operating Characteristics (ROC).
Transforming Systems Engineering through Model-Centric Engineering
2018-02-28
Contract No. HQ0034-13-D-0004. Research Tasks: 48, 118, 141, 157, 170. Report No. SERC-2018-TR-103. Transforming Systems Engineering through Model-Centric Engineering. Technical Report SERC-2018-TR-103, February 28, 2018. Principal Investigator: Dr. Mark Blackburn, Stevens Institute of Technology, Systems Engineering Research Center. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the
LAV@HAZARD: a Web-GIS Framework for Real-Time Forecasting of Lava Flow Hazards
Del Negro, C.; Bilotta, G.; Cappello, A.; Ganci, G.; Herault, A.
2014-12-01
Crucial to lava flow hazard assessment is the development of tools for real-time prediction of flow paths, flow advance rates, and final flow lengths. Accurate prediction of flow paths and advance rates requires not only rapid assessment of eruption conditions (especially effusion rate) but also improved models of lava flow emplacement. Here we present the LAV@HAZARD web-GIS framework, which combines spaceborne remote sensing techniques and numerical simulations for real-time forecasting of lava flow hazards. By using satellite-derived discharge rates to drive a lava flow emplacement model, LAV@HAZARD allows timely definition of parameters and maps essential for hazard assessment, including the propagation time of lava flows and the maximum run-out distance. We take advantage of the flexibility of the HOTSAT thermal monitoring system to process satellite images coming from sensors with different spatial, temporal and spectral resolutions. HOTSAT was designed to ingest infrared satellite data acquired by the MODIS and SEVIRI sensors to output hot spot location, lava thermal flux and discharge rate. We use LAV@HAZARD to merge this output with the MAGFLOW physics-based model to simulate lava flow paths and to update, in a timely manner, flow simulations. Thus, any significant changes in lava discharge rate are included in the predictions. A significant benefit in terms of computational speed was obtained thanks to the parallel implementation of MAGFLOW on graphic processing units (GPUs). All this useful information has been gathered into the LAV@HAZARD platform which, due to the high degree of interactivity, allows generation of easily readable maps and a fast way to explore alternative scenarios. We will describe and demonstrate the operation of this framework using a variety of case studies pertaining to Mt Etna, Sicily. Although this study was conducted on Mt Etna, the approach used is designed to be applicable to other volcanic areas around the world.
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data where the number of data points is limited. Earthquake patterns in the Zagros have been assessed for intervals of 10 and 50 years using the fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Notably, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
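A fuzzy rule-based hazard estimate of the general kind described can be sketched with triangular membership functions and two toy rules. All rule shapes and thresholds below are invented for illustration and are unrelated to the paper's calibrated rule base:

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hazard_level(magnitude, depth_km):
    # Toy two-rule Mamdani-style system (illustrative only):
    #   R1: IF magnitude is high AND depth is shallow THEN hazard is high
    #   R2: IF magnitude is low  OR  depth is deep    THEN hazard is low
    mag_high = tri(magnitude, 5.0, 7.5, 10.0)
    mag_low = tri(magnitude, 0.0, 3.0, 6.0)
    shallow = tri(depth_km, 0.0, 5.0, 30.0)
    deep = tri(depth_km, 20.0, 100.0, 300.0)
    w_high = min(mag_high, shallow)  # fuzzy AND -> min
    w_low = max(mag_low, deep)       # fuzzy OR  -> max
    if w_high + w_low == 0.0:
        return 0.0
    # Defuzzify as a weighted average of rule-output centroids
    # (high hazard = 0.9, low hazard = 0.1, both invented).
    return (w_high * 0.9 + w_low * 0.1) / (w_high + w_low)

strong_shallow = hazard_level(7.0, 10.0)
weak_deep = hazard_level(2.0, 150.0)
```

A strong shallow event fires only the high-hazard rule and a weak deep event only the low-hazard rule; real rule bases like the paper's encode many more antecedents over spatial seismicity data.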
Modeling mechanical effects on promotion and retardation of martensitic transformation
Energy Technology Data Exchange (ETDEWEB)
Maalekian, Mehran, E-mail: mehran.maalekian@ubc.ca [Department of Materials Engineering, University of British Columbia, 309-6350 Stores Road, Vancouver, B.C. V61Z4 (Canada)]; Kozeschnik, Ernst [Christian Doppler Laboratory for 'Early Stages of Precipitation', Institute of Materials Science and Technology, Vienna University of Technology (Austria)]
2011-01-25
Research highlights: Compressive elastic stresses up to 250 MPa are applied in continuous cooling. Using the thermodynamic data and the maximum value of the mechanical driving force, the predicted increase in M_s (≈0.1 K/MPa) is in agreement with experiment. Austenite was deformed plastically at different temperatures (800 °C–1100 °C). A high deformation temperature (1100 °C) as well as a low plastic strain (ε_ave ≈ 30%) do not affect the martensite transformation noticeably, whereas a lower deformation temperature (e.g. 900 °C) and a large plastic strain (ε_ave ≈ 70%) retard the martensite transformation. The theory of mechanical stabilization predicts the depression of M_s. - Abstract: The influence of compressive stress and prior plastic deformation of austenite on the martensite transformation in a eutectoid steel is studied both experimentally and theoretically. It is demonstrated that martensite formation is assisted by stress but is retarded when the transformation occurs from deformed austenite. With quantitative modeling of the problem based on the theory of displacive shear transformation, an explanation of the two opposite roles of mechanical treatment applied prior to or simultaneously with the martensite transformation is presented.
BEHAVIORAL HAZARD IN HEALTH INSURANCE*
Baicker, Katherine; Mullainathan, Sendhil; Schwartzstein, Joshua
2015-01-01
A fundamental implication of standard moral hazard models is overuse of low-value medical care because copays are lower than costs. In these models, the demand curve alone can be used to make welfare statements, a fact relied on by much empirical work. There is ample evidence, though, that people misuse care for a different reason: mistakes, or “behavioral hazard.” Much high-value care is underused even when patient costs are low, and some useless care is bought even when patients face the full cost. In the presence of behavioral hazard, welfare calculations using only the demand curve can be off by orders of magnitude or even be the wrong sign. We derive optimal copay formulas that incorporate both moral and behavioral hazard, providing a theoretical foundation for value-based insurance design and a way to interpret behavioral “nudges.” Once behavioral hazard is taken into account, health insurance can do more than just provide financial protection—it can also improve health care efficiency. PMID:23930294
Miller, Cecelia R; Ruppert, Amy S; Heerema, Nyla A; Maddocks, Kami J; Labanowska, Jadwiga; Breidenbach, Heather; Lozanski, Gerard; Zhao, Weiqiang; Gordon, Amber L; Jones, Jeffrey A; Flynn, Joseph M; Jaglowski, Samantha M; Andritsos, Leslie A; Blum, Kristie A; T Awan, Farrukh; Rogers, Kerry A; Grever, Michael R; Johnson, Amy J; Abruzzo, Lynne V; Hertlein, Erin K; Blachly, James S; Woyach, Jennifer A; Byrd, John C
2017-08-22
Ibrutinib is a highly effective targeted therapy for chronic lymphocytic leukemia (CLL). However, ibrutinib must be discontinued in a subset of patients due to progressive CLL or transformation to aggressive lymphoma (Richter transformation). Transformation occurs early in the course of therapy and has an extremely poor prognosis. Thus, identification of prognostic markers associated with transformation is of utmost importance. Near-tetraploidy (4 copies of most chromosomes within a cell) has been reported in various lymphomas, but its incidence and significance in CLL has not been described. Using fluorescence in situ hybridization, we detected near-tetraploidy in 9 of 297 patients with CLL prior to beginning ibrutinib treatment on 1 of 4 clinical trials (3.0%; 95% confidence interval [CI], 1.4%-5.7%). Near-tetraploidy was associated with aggressive disease characteristics: Rai stage 3/4 (P = .03), deletion 17p (P = .03), and complex karyotype (P = .01). Near-tetraploidy was also associated with ibrutinib discontinuation due to Richter transformation (P transformation with diffuse large B-cell lymphoma. In a multivariable model, near-tetraploidy (hazard ratio [HR], 8.66; 95% CI, 3.83-19.59; P transformation. Our results suggest that near-tetraploidy is a potential prognostic marker for Richter transformation to assess in patients going on ibrutinib.
A vision based top-view transformation model for a vehicle parking assistant.
Lin, Chien-Chuan; Wang, Ming-Shi
2012-01-01
This paper proposes the Top-View Transformation Model for image coordinate transformation, which involves transforming a perspective projection image into its corresponding bird's eye vision. A fitting parameters searching algorithm estimates the parameters that are used to transform the coordinates from the source image. Using this approach, it is not necessary to provide any interior and exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing and using the transformation matrix estimated from the first image frame for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table, and loaded into the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird's eye view to the vehicle driver.
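The top-view transformation itself is a planar projective mapping (homography); once four ground-plane correspondences are known, the matrix can be estimated without any interior or exterior camera parameters, as the abstract notes. A sketch with made-up pixel coordinates (not the paper's fitting-parameters search algorithm):

```python
import numpy as np

def fit_homography(src, dst):
    # Estimate the 3x3 projective transform H mapping four source
    # pixels to four target (top-view) pixels, by solving the standard
    # 8x8 linear system with h33 fixed to 1.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    # Apply H in homogeneous coordinates and dehomogenize.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical trapezoid seen by a rear camera -> rectangle in the
# bird's-eye view (all coordinates invented for the example).
src = [(100, 300), (540, 300), (0, 480), (640, 480)]
dst = [(0, 0), (200, 0), (0, 200), (200, 200)]
H = fit_homography(src, dst)
```

Precomputing H once and reusing it per frame is exactly the Matrix Mapping Table idea the paper describes: each output pixel's source coordinate is fixed, so warping reduces to table lookups on the embedded platform.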
A Vision Based Top-View Transformation Model for a Vehicle Parking Assistant
Directory of Open Access Journals (Sweden)
Chien-Chuan Lin
2012-03-01
This paper proposes the Top-View Transformation Model for image coordinate transformation, which involves transforming a perspective projection image into its corresponding bird’s eye vision. A fitting parameters searching algorithm estimates the parameters that are used to transform the coordinates from the source image. Using this approach, it is not necessary to provide any interior and exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing and using the transformation matrix estimated from the first image frame for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table, and loaded into the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird’s eye view to the vehicle driver.
Transformation Strategies between Block-Oriented and Graph-Oriented Process Modelling Languages
DEFF Research Database (Denmark)
Mendling, Jan; Lassen, Kristian Bisgaard; Zdun, Uwe
Much recent research work discusses the transformation between different process modelling languages. This work, however, is mainly focussed on specific process modelling languages, and thus the general reusability of the applied transformation concepts is rather limited. In this paper, we aim to abstract from concrete transformation strategies by distinguishing two major paradigms for process modelling languages: block-oriented languages (such as BPEL and BPML) and graph-oriented languages (such as EPCs and YAWL). The contributions of this paper are generic strategies for transforming from block-oriented process languages to graph-oriented languages, and vice versa. We also present two case studies of applying our strategies.
Doubly sparse factor models for unifying feature transformation and feature selection
International Nuclear Information System (INIS)
Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko
2010-01-01
A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.
Doubly sparse factor models for unifying feature transformation and feature selection
Energy Technology Data Exchange (ETDEWEB)
Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science and Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.jp [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)]
2010-06-01
A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.
A structure for models of hazardous materials with complex behavior
International Nuclear Information System (INIS)
Rodean, H.C.
1991-01-01
Most atmospheric dispersion models used to assess the environmental consequences of accidental releases of hazardous chemicals do not have the capability to simulate the pertinent chemical and physical processes associated with the release of the material and its mixing with the atmosphere. The purpose of this paper is to present a materials sub-model with the flexibility to simulate the chemical and physical behaviour of a variety of materials released into the atmosphere. The model, which is based on thermodynamic equilibrium, incorporates the ideal gas law, temperature-dependent vapor pressure equations, temperature-dependent dissociation reactions, and reactions with atmospheric water vapor. The model equations, written in terms of pressure ratios and dimensionless parameters, are used to construct equilibrium diagrams with temperature and the mass fraction of the material in the mixture as coordinates. The model's versatility is demonstrated by its application to the release of UF6 and N2O4, two materials with very different physical and chemical properties. (author)
A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.
Hardy, Lorna; Hogarth, Lee
2017-12-01
This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Inventory) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline (p choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers.
Energy Technology Data Exchange (ETDEWEB)
Rao Weifeng [Department of Materials Science and Engineering, Rutgers University, 607 Taylor Road, Piscataway, NJ 08854 (United States); Khachaturyan, Armen G., E-mail: khach@jove.rutgers.edu [Department of Materials Science and Engineering, Rutgers University, 607 Taylor Road, Piscataway, NJ 08854 (United States)
2011-06-15
A phase field theory of proper displacive transformations is developed to address the microstructure evolution and its response to applied fields in decomposing and martensitic systems. The theory is based on the explicit equation for the non-equilibrium free energy function of the transformation strain obtained by a consistent separation of the total strain into transformation and elastic strains. The transformation strain is considered to be a relaxing long-range order parameter evolving in accordance with the system energetics rather than as a fixed material constant used in the conventional Eshelby theory of coherent inclusions. The elastic strain is defined as a coherency strain recovering the crystal lattice compatibility. The obtained free energy function of the transformation strain leads to the concepts of structural anisotropy and directional flexibility of low symmetry phases. The formulated vector model of displacive transformation makes apparent a similarity between proper displacive transformation and ferromagnetic/ferroelectric transformation and, in particular, a similarity between the structural anisotropy and magnetic/polar anisotropy of ferromagnetic/ferroelectric materials. It even predicts the feasibility of a glass-like structural state with unlimited directional flexibility of the transformation strain that is conceptually similar to a ferromagnetic glass. The thermodynamics of the equilibrium between low symmetry phases and the thermodynamic conditions leading to the formation of adaptive states are formulated.
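The relaxational evolution of a transformation-strain-like order parameter under a double-well free energy, which this phase field theory generalizes to the vector case, can be illustrated in one dimension with Allen-Cahn dynamics. This is a generic phase-field demo with invented parameters, not the authors' vector model:

```python
import numpy as np

def free_energy(eta, dx, kappa=0.5):
    # Double-well bulk term f = eta^2 (1 - eta)^2 plus gradient energy.
    grad = (np.roll(eta, -1) - eta) / dx
    return float(np.sum(eta**2 * (1 - eta) ** 2 + 0.5 * kappa * grad**2) * dx)

def relax(eta, dx, dt, steps, kappa=0.5, mobility=1.0):
    # Explicit-Euler Allen-Cahn dynamics for a non-conserved order
    # parameter: d(eta)/dt = -M * (df/deta - kappa * laplacian(eta)),
    # with periodic boundaries via np.roll.
    for _ in range(steps):
        lap = (np.roll(eta, 1) - 2 * eta + np.roll(eta, -1)) / dx**2
        dfdeta = 2 * eta * (1 - eta) * (1 - 2 * eta)
        eta = eta - dt * mobility * (dfdeta - kappa * lap)
    return eta

x = np.linspace(0.0, 10.0, 64, endpoint=False)  # periodic domain
dx = x[1] - x[0]
eta0 = 0.5 + 0.1 * np.sin(2 * np.pi * x / 10.0)  # perturbed unstable state
eta = relax(eta0, dx, dt=0.01, steps=2000)
```

Because the dynamics is a gradient flow, the free energy decreases monotonically as the unstable uniform state decomposes into domains of the two wells; the paper's treatment replaces the scalar order parameter with the full transformation strain.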
Vatankhah, Soudabeh; Alirezaei, Samira; Khosravizadeh, Omid; Mirbahaeddin, Seyyed Elmira; Alikhani, Mahtab; Alipanah, Mobarakeh
2017-08-01
In today's transforming world, increased productivity and efficient use of existing facilities are practically no longer a choice but a necessity. In this line, attention to change and transformation is one of the factors affecting the growth of productivity in organizations, especially in hospitals. To examine the effect of transformational leadership on the productivity of employees in teaching hospitals affiliated to Iran University of Medical Sciences, this cross-sectional study was conducted on 254 participants from educational and medical centers affiliated to Iran University of Medical Sciences (Tehran, Iran) in 2016. The standard questionnaires of Bass & Avolio and of Hersi & Goldsmith were used to assess transformational leadership and level of productivity, respectively. The research assumptions were tested at a significance level of 0.05 by applying descriptive statistics and structural equation modeling (SEM) using SPSS 19 and Amos 24. Fit indicators of the assessment model after amendment include a chi-square to degrees-of-freedom ratio of 2.756, a CFI of 0.95, an IFI of 0.92, and a root mean square error of approximation (RMSEA) of 0.10. These results indicate that the model is well fitting after the amendment. Analysis of the model's assumptions and the final model of the research reveals an effect of transformational leadership on employees' productivity with a standardized coefficient of 0.83 (p=0.001). This research indicates that the more the leadership and decision-making style in hospitals leans towards the transformational mode, the more positive outcomes it brings among employees and the organization due to increased productivity. Therefore, it is essential to pay focused attention to training/educational programs in organizations to create and encourage transformational leadership behaviors which hopefully lead to more productive employees.
Bayesian inference method for stochastic damage accumulation modeling
International Nuclear Information System (INIS)
Jiang, Xiaomo; Yuan, Yong; Liu, Xian
2013-01-01
Damage accumulation-based reliability models play an increasingly important role in the successful realization of condition-based maintenance for complicated engineering systems. This paper develops a Bayesian framework to establish a stochastic damage accumulation model from historical inspection data, considering data uncertainty. A proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Unlike other hazard modeling techniques such as the normal linear regression model, the approach does not require any distribution assumption for the hazard model and can be applied to a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards models and to estimate model parameters by Bayesian inference with Markov chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. The Anderson–Darling goodness-of-fit test is employed to perform the normality test, and the Box–Cox transformation approach is utilized to convert non-normal data into a normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with seepage data collected from real-world subway tunnels.
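The Box-Cox-plus-Anderson-Darling validation step described above is easy to sketch with SciPy, using synthetic lognormal data as a stand-in for the seepage measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Lognormal stand-in for skewed inspection data such as seepage rates.
skewed = rng.lognormal(mean=1.0, sigma=0.8, size=500)

# Anderson-Darling normality check before transformation.
ad_before = stats.anderson(skewed, dist="norm")

# Box-Cox transformation with the lambda chosen by maximum likelihood.
transformed, lam = stats.boxcox(skewed)

# Normality check again on the transformed data.
ad_after = stats.anderson(transformed, dist="norm")
```

For lognormal data the maximum-likelihood lambda lands near zero (the log transform is the lambda → 0 limit of Box-Cox), and the Anderson-Darling statistic drops sharply after transformation, which is the hypothesis-testing pattern the paper relies on.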
BOX-COX transformation and random regression models for fecal egg count data
Directory of Open Access Journals (Sweden)
Marcos Vinicius Silva
2012-01-01
Full Text Available Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.
Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.
da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C
2011-01-01
Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.
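Both FEC records above hinge on estimating the Box-Cox parameter before fitting the genetic model. A minimal sketch of that first step, using scipy's maximum-likelihood choice of lambda on synthetic right-skewed counts (the data here are simulated, not the Beltsville herd records):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical right-skewed "fecal egg count" style data (lognormal),
# standing in for the FEC measurements described in the abstract.
fec = rng.lognormal(mean=3.0, sigma=1.0, size=500) + 1.0  # shift keeps values > 0

# Box-Cox requires strictly positive data; scipy picks the optimal
# lambda by maximum likelihood.
transformed, lam = stats.boxcox(fec)

skew_before = stats.skew(fec)
skew_after = stats.skew(transformed)
print(f"lambda = {lam:.3f}, skewness {skew_before:.2f} -> {skew_after:.2f}")
```

For approximately lognormal data the estimated lambda lands near zero, recovering the familiar log transform as a special case.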
Transform Methods for Precision Nonlinear Wave Models of Flexible space Structures
1990-08-20
developed, each of which has motivated a structural control methodology in a natural way. The Transform Element Modelling (TEM) approach uses the Laplace transform; taking the Laplace transform of the governing equations (neglecting initial conditions) yields the transformed wave equations of the flexible structure.
ANIMATION STRATEGIES FOR SMOOTH TRANSFORMATIONS BETWEEN DISCRETE LODS OF 3D BUILDING MODELS
Directory of Open Access Journals (Sweden)
M. Kada
2016-06-01
Full Text Available The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article is the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
Thermokinetic Modeling of Phase Transformation in the Laser Powder Deposition Process
Foroozmehr, Ehsan; Kovacevic, Radovan
2009-08-01
A finite element model coupled with a thermokinetic model is developed to predict the phase transformation of the laser deposition of AISI 4140 on a substrate with the same material. Four different deposition patterns, long-bead, short-bead, spiral-in, and spiral-out, are used to cover a similar area. Using a finite element model, the temperature history of the laser powder deposition (LPD) process is determined. The martensite transformation as well as martensite tempering is considered to calculate the final fraction of martensite, ferrite, cementite, ɛ-carbide, and retained austenite. Comparing the surface hardness topography of different patterns reveals that path planning is a critical parameter in laser surface modification. The predicted results are in close agreement with the experimental results.
Lohe, M. A.
2018-06-01
We generalize the Watanabe–Strogatz (WS) transform, which acts on the Kuramoto model in d = 2 dimensions, to a higher-dimensional vector transform which operates on vector oscillator models of synchronization in any dimension d, for the case of identical frequency matrices. These models have conserved quantities constructed from the cross ratios of inner products of the vector variables, which are invariant under the vector transform, and have trajectories which lie on the unit sphere S^(d-1). Application of the vector transform leads to a partial integration of the equations of motion, leaving a reduced number of independent equations to be solved, for any number of nodes N. We discuss properties of complete synchronization and use the reduced equations to derive a stability condition for completely synchronized trajectories on S^(d-1). We further generalize the vector transform to a mapping which acts in R^d and in particular preserves the unit ball B^d, and leaves invariant the cross ratios constructed from inner products of vectors in B^d. This mapping can be used to partially integrate a system of vector oscillators with trajectories in B^d, and for d = 2 leads to an extension of the Kuramoto system to a system of oscillators with time-dependent amplitudes and trajectories in the unit disk. We find an inequivalent generalization of the Möbius map which also preserves the unit ball but leaves invariant a different set of cross ratios, this time constructed from the vector norms. This leads to a different extension of the Kuramoto model with trajectories in the complex plane that can be partially integrated by means of fractional linear transformations.
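The WS transform acts on the classical d = 2 Kuramoto dynamics. As a point of reference, a minimal numpy simulation of that base system (the coupling strength and frequency spread are illustrative choices, not values from the paper) shows the onset of synchronization measured by the order parameter:

```python
import numpy as np

# Minimal Kuramoto simulation on the unit circle (the d = 2 case the
# WS transform reduces). Parameters below are illustrative.
rng = np.random.default_rng(2)
N, K, dt, steps = 50, 2.0, 0.01, 4000
omega = rng.normal(0.0, 0.1, N)          # near-identical natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

def order_parameter(theta):
    """Magnitude r of the complex order parameter (1/N) sum_j exp(i theta_j)."""
    return np.abs(np.exp(1j * theta).mean())

r0 = order_parameter(theta)
for _ in range(steps):
    # dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i), Euler step
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + coupling)
r1 = order_parameter(theta)
print(f"order parameter: {r0:.2f} -> {r1:.2f}")
```

With coupling well above the critical value, an initially incoherent population locks and r approaches 1.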
Numerical modelling of tools steel hardening. A thermal phenomena and phase transformations
Directory of Open Access Journals (Sweden)
T. Domański
2010-01-01
Full Text Available This paper presents a model of tool steel hardening that takes into consideration thermal phenomena and phase transformations in the solid state. In the modelling of thermal phenomena, the heat transfer equation is solved by the Finite Element Method. Continuous heating transformation (CHT) and continuous cooling transformation (CCT) diagrams of the considered steel are used in the model of phase transformations. Phase fractions changing during continuous heating (austenite) and continuous cooling (pearlite or bainite) are described in the model by the Johnson-Mehl-Avrami formula. For cooling rates >100 K/s, the modified Koistinen-Marburger equation is used; it determines the fraction of martensite formed.
Shang, Han Lin
2015-01-01
The Box-Cox transformation can sometimes yield noticeable improvements in model simplicity, variance homogeneity and precision of estimation, such as in modelling and forecasting age-specific fertility. Despite its importance, there have been few studies focusing on the optimal selection of Box-Cox transformation parameters in demographic forecasting. A simple method is proposed for selecting the optimal Box-Cox transformation parameter, along with an algorithm based on an in-sample forecast ...
A hybrid Scatter/Transform cloaking model
Directory of Open Access Journals (Sweden)
Gad Licht
2015-01-01
Full Text Available A new Scatter/Transform cloak is developed that combines the light bending of refraction characteristic of a Transform cloak with the scatter cancellation characteristic of a Scatter cloak. The hybrid cloak incorporates both Transform’s variable index of refraction with modified linear intrusions to maximize the Scatter cloak effect. Scatter/Transform improved the scattering cross-section of cloaking in a 2-dimensional space to 51.7% compared to only 39.6% or 45.1% respectively with either Scatter or Transform alone. Metamaterials developed with characteristics based on the new ST hybrid cloak will exhibit superior cloaking capabilities.
Household hazardous waste disposal to landfill: Using LandSim to model leachate migration
International Nuclear Information System (INIS)
Slack, Rebecca J.; Gronow, Jan R.; Hall, David H.; Voulvoulis, Nikolaos
2007-01-01
Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW. - Aquatic pollutants linked to the disposal of household hazardous waste in municipal landfills have the potential to exist in soil and groundwater for many years
The effect of scale in daily precipitation hazard assessment
Directory of Open Access Journals (Sweden)
J. J. Egozcue
2006-01-01
Full Text Available Daily precipitation is recorded as the total amount of water collected by a rain-gauge in 24 h. Events are modelled as a Poisson process and the 24 h precipitation by a Generalised Pareto Distribution (GPD) of excesses. Hazard assessment is complete when estimates of the Poisson rate and the distribution parameters, together with a measure of their uncertainty, are obtained. The shape parameter of the GPD determines the support of the variable: the Weibull domain of attraction (DA) corresponds to finite-support variables, as should be the case for natural phenomena. However, the Fréchet DA has been reported for daily precipitation, which implies an infinite support and a heavy-tailed distribution. Bayesian techniques are used to estimate the parameters. The approach is illustrated with precipitation data from the Eastern coast of the Iberian Peninsula affected by severe convective precipitation. The estimated GPD is mainly in the Fréchet DA, which is incompatible with the common-sense assumption that precipitation is a bounded phenomenon. The bounded character of precipitation is then taken as an a priori hypothesis. Consistency of this hypothesis with the data is checked in two cases: using the raw data (in mm) and using log-transformed data. As expected, Bayesian model checking clearly rejects the model in the raw-data case. However, log-transformed data seem to be consistent with the model. This fact may be due to the adequacy of the log-scale for representing positive measurements, for which relative differences are more meaningful than absolute ones.
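The Poisson-plus-GPD chain described above can be sketched as follows. The threshold, event rate, and GPD parameters are invented for illustration, and the fit uses scipy's maximum likelihood rather than the paper's Bayesian estimation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic daily-precipitation excesses over a threshold u (mm), drawn
# from a bounded (Weibull-DA) GPD with negative shape xi. All values
# are assumptions for illustration, not the Iberian data.
u, xi_true, scale_true = 20.0, -0.15, 12.0
excesses = stats.genpareto.rvs(xi_true, scale=scale_true, size=2000,
                               random_state=rng)

# Fit shape and scale by maximum likelihood (location fixed at 0).
xi_hat, _, scale_hat = stats.genpareto.fit(excesses, floc=0.0)

# Poisson-GPD return level: with event rate lam (events/year), the
# T-year level solves P(excess > x - u) = 1 / (lam * T).
lam, T = 5.0, 100.0
p = 1.0 / (lam * T)
x_T = u + stats.genpareto.ppf(1.0 - p, xi_hat, scale=scale_hat)
print(f"xi_hat = {xi_hat:.3f}, 100-year level approx. {x_T:.1f} mm")
```

A negative fitted shape implies a finite upper bound u + scale/|xi| for the daily amount, matching the bounded-precipitation hypothesis the abstract argues for.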
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the hazard to ground subsidence using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with the probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
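Of the models compared, the frequency ratio model and the AUC validation are simple enough to sketch with synthetic data. The factor classes and subsidence probabilities below are invented, and the decision trees themselves are not reproduced:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
# Synthetic stand-in for a subsidence inventory: one categorical factor
# (e.g. a binned depth-to-mine-workings class) per map cell.
n_cells = 10_000
factor = rng.integers(0, 5, size=n_cells)                 # 5 factor classes
p_subs = np.array([0.30, 0.15, 0.08, 0.04, 0.02])[factor] # hazard by class
subsided = rng.random(n_cells) < p_subs

# Frequency ratio: (class share of subsided cells) / (class share of all cells).
fr = np.zeros(5)
for c in range(5):
    in_class = factor == c
    fr[c] = (subsided[in_class].sum() / subsided.sum()) / (in_class.sum() / n_cells)
score = fr[factor]                                        # hazard index per cell

# AUC via the rank-sum (Mann-Whitney) identity, with midranks for ties.
ranks = rankdata(score)
n_pos, n_neg = subsided.sum(), (~subsided).sum()
auc = (ranks[subsided].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"frequency-ratio AUC = {auc:.3f}")
```

A real study would compute one frequency ratio table per factor and sum (or multiply) the per-factor indices, then validate against held-out inventory cells exactly as the AUC step does here.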
International Nuclear Information System (INIS)
Paris, P.
1989-11-01
This report describes a model which may be used to derive hazardous waste concentration limits in order to prevent ground water pollution from landfill disposal. First, the leachate concentration limits are determined taking into account the attenuation capacity of the landfill site as a whole; waste concentrations are then derived by an elution model which assumes a constant ratio between liquid and solid concentrations. In the example, two types of landfill have been considered, and in each case concentration limits have been calculated for some hazardous substances and compared with the corresponding regulatory limits. (author)
A novel compact model for on-chip stacked transformers in RF-CMOS technology
Jun, Liu; Jincai, Wen; Qian, Zhao; Lingling, Sun
2013-08-01
A novel compact model for on-chip stacked transformers is presented. The proposed model topology clearly distinguishes the eddy current, resistive, and capacitive losses of the primary and secondary coils in the substrate. A method to analytically determine the non-ideal parasitics between the primary coil and the substrate is provided. The model is further verified by the excellent match between measured and simulated S-parameters, using parameters extracted for a 1:1 stacked transformer manufactured in a commercial RF-CMOS technology.
A New Perceptual Mapping Model Using Lifting Wavelet Transform
Taha TahaBasheer; Ehkan Phaklen; Ngadiran Ruzelita
2017-01-01
Perceptual mapping approaches have been widely used in visual information processing in multimedia and internet of things (IoT) applications. Accumulative Lifting Difference (ALD) is proposed in this paper as a texture mapping model based on the low-complexity lifting wavelet transform, combined with luminance masking to create an efficient perceptual mapping model for estimating Just Noticeable Distortion (JND) in digital images. In addition to its low-complexity operations, experimental results sho...
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
A random effects meta-analysis model with Box-Cox transformation
Directory of Open Access Journals (Sweden)
Yusuke Yamaguchi
2017-07-01
Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and
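The conventional random-effects pooling and I² statistic that the abstract contrasts with the proposed model can be sketched with the standard DerSimonian-Laird estimator. Here a fixed log transform stands in for the paper's estimated Box-Cox lambda, and the within-study variances are assumed unchanged by the transform, which a full treatment would propagate (e.g. by the delta method):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of effect estimates y with within-study
    variances v; returns the pooled mean, tau^2 and the I^2 statistic."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    I2 = max(0.0, (Q - df) / Q) * 100.0
    return mu, tau2, I2

# Skewed toy meta-analysis: positive-scale effect estimates.
rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.5, sigma=0.6, size=12)  # skewed effect estimates
v = np.full(12, 0.05)                            # within-study variances (assumed)

mu_raw, tau2_raw, I2_raw = dersimonian_laird(y, v)
mu_log, tau2_log, I2_log = dersimonian_laird(np.log(y), v)
print(f"raw scale: mu = {mu_raw:.2f}, I2 = {I2_raw:.0f}%")
print(f"log scale: mu = {mu_log:.2f} (back-transformed median {np.exp(mu_log):.2f})")
```

Summarising by the back-transformed median rather than the raw-scale mean mirrors the paper's suggestion of reporting an overall median after transformation.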
Kinetic model for transformation from nano-sized amorphous $TiO_2$ to anatase
Madras, Giridhar; McCoy, Benjamin J
2006-01-01
We propose a kinetic model for the transformation of nano-sized amorphous $TiO_2$ to anatase with associated coarsening by coalescence. Based on population balance (distribution kinetics) equations for the size distributions, the model applies a first-order rate expression for transformation combined with Smoluchowski coalescence for the coarsening particles. Size distribution moments (number and mass of particles) lead to dynamic expressions for extent of reaction and average anatase particl...
Wavelet transform-vector quantization compression of supercomputer ocean model simulation output
Energy Technology Data Exchange (ETDEWEB)
Bradley, J N; Brislawn, C M
1992-11-12
We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
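A toy version of the subband-decomposition-plus-quantization pipeline, using a one-level Haar transform and uniform scalar quantization in place of the paper's trained vector quantizers (the quantizer steps and the synthetic "ocean field" are assumptions for illustration):

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar wavelet decomposition into LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def quantize(band, step):
    """Uniform scalar quantization; a stand-in for the paper's
    application-specific vector quantizers."""
    return np.round(band / step) * step

rng = np.random.default_rng(3)
field = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth 2-D field
LL, LH, HL, HH = haar_dwt2(field)
# Coarser steps for the detail subbands, where smooth data carry little energy.
LLq, LHq, HLq, HHq = (quantize(b, s) for b, s in
                      [(LL, 0.5), (LH, 2.0), (HL, 2.0), (HH, 4.0)])
energy = sum((b**2).sum() for b in (LL, LH, HL, HH))
print(f"LL energy share: {(LL**2).sum() / energy:.3f}")
```

For smooth data nearly all the energy concentrates in the LL band, which is why allocating bits per subband, as the paper's rate-distortion design does, pays off.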
Evaluation and control of radon daughter hazards in uranium mines
International Nuclear Information System (INIS)
Holaday, D.A.
1974-11-01
This monograph discusses primarily those health hazards to uranium miners which are produced by exposure to ionizing radiation. Emphasis is placed on the areas of evaluation of exposures to the radioactive gas radon-222 and its short-lived transformation products, and methods of controlling such exposures. A limited discussion of the biological effects of radon and radon daughters is undertaken, and some procedures are given for evaluating hazards created by other common contaminants of mine atmospheres. A large amount of information exists on these topics, some of which is unpublished or is not readily available. While efforts were made to obtain data from all sources, undoubtedly some valuable work was overlooked. The monograph is an endeavor to assemble pertinent information and make it available to those who are concerned with producing uranium at minimal risks. Where they were available, a variety of procedures for evaluating hazards are given, and examples of systems for controlling hazards are included. 154 references
Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel
2016-04-01
The work is devoted to a data assimilation algorithm for atmospheric chemistry transport and transformation models. In the work a control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of the target functional combining a control function norm with a norm of the misfit between measured data and its model-simulated analog. The transport and transformation model acts as a constraint. The constrained minimization problem is solved with the Euler-Lagrange variational principle [1], which allows reducing it to a system of direct, adjoint and control function estimate relations. This provides a physically plausible structure of the resulting analysis without the model error covariance matrices sought within conventional approaches to data assimilation. The high dimensionality of atmospheric chemistry models and a real-time mode of operation demand computational efficiency of the data assimilation algorithms. Computational issues with complicated models can be solved by using a splitting technique. Within this approach a complex model is split into a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms to assimilate data on the nonlinear transformation stage are compared. In the work we compare data assimilation results for both artificial and real measurement data. With these data we study the impact of transformation processes and data assimilation on the performance of the modeling system [3]. The
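The variational estimate of the control function described above can be sketched for a toy linear model: with transport reduced to an assumed causal convolution operator, minimizing the misfit norm plus a control-function norm is ordinary Tikhonov regularization (the operator, kernel width, and true emission profile below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy linear "transport" operator mapping an emission-rate control
# function q (n source times) to m observed concentrations: a causal
# Gaussian kernel, an assumption standing in for the chemistry model.
n, m = 40, 25
t_obs = np.linspace(0, 1, m)[:, None]
t_src = np.linspace(0, 1, n)[None, :]
A = np.exp(-((t_obs - t_src) ** 2) / 0.01) * (t_obs >= t_src)

q_true = np.exp(-((np.linspace(0, 1, n) - 0.4) ** 2) / 0.02)  # hidden emissions
d = A @ q_true + 0.01 * rng.standard_normal(m)                # measurements

# Variational estimate: minimize ||A q - d||^2 + alpha ||q||^2, i.e. the
# data misfit norm plus a control-function norm, as in the abstract.
alpha = 1e-3
q_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)
rel_err = np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true)
print(f"relative error of recovered control function: {rel_err:.2f}")
```

The closed-form normal-equations solve replaces the direct/adjoint iteration structure of the full variational scheme, which becomes necessary once the model constraint is nonlinear or too large to form explicitly.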
Combining computational models for landslide hazard assessment of Guantánamo province, Cuba
Castellanos Abella, E.A.
2008-01-01
As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial
3DXRD characterization and modeling of solid-state transformation processes
DEFF Research Database (Denmark)
Juul Jensen, Dorte; Offerman, S.E.; Sietsma, J.
2008-01-01
of metallic microstructures with much more detail than hitherto possible. Among these modeling activities are three-dimensional (3D) geometric modeling, 3D molecular dynamics modeling, 3D phase-field modeling, two-dimensional (2D) cellular automata, and 2D Monte Carlo simulations....... data valuable for validation of various models of microstructural evolution is discussed. Examples of 3DXRD measurements related to recrystallization and to solid-state phase transformations in metals are described. 3DXRD measurements have led to new modeling activity predicting the evolution...
Modeling of nitrogen transformation in an integrated multi-trophic aquaculture (IMTA)
Silfiana; Widowati; Putro, S. P.; Udjiani, T.
2018-03-01
A dynamic model of nitrogen transformation in IMTA (Integrated Multi-Trophic Aquaculture) is proposed. IMTA is a polyculture in which several biotas are maintained to optimize waste recycling as a food source. The purpose of this paper is to predict the decrease and transformation of nitrogen in IMTA, comprising ammonia (NH3), nitrite (NO2) and nitrate (NO3). Nitrogen transformation involves several processes: nitrification, assimilation, and volatilization. Numerical simulations are performed with initial parameters and values based on a review of previous research. The numerical results show that the nitrogen concentrations in IMTA decrease and stabilize at different times.
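A minimal version of such a nitrogen chain, written as first-order ODEs with illustrative rate constants (not the paper's calibrated values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order nitrogen-transformation chain NH3 -> NO2 -> NO3, with
# volatilization of NH3 and assimilation (uptake) of NO3. All rate
# constants are assumptions for illustration.
k_nit1, k_nit2 = 0.30, 0.50   # nitrification steps, 1/day
k_vol, k_asm = 0.05, 0.10     # volatilization, assimilation, 1/day

def rhs(t, y):
    nh3, no2, no3 = y
    return [-(k_nit1 + k_vol) * nh3,          # NH3 lost to NO2 and to air
            k_nit1 * nh3 - k_nit2 * no2,      # NO2 formed and oxidized
            k_nit2 * no2 - k_asm * no3]       # NO3 formed and assimilated

sol = solve_ivp(rhs, (0.0, 60.0), [10.0, 0.0, 0.0], dense_output=True)
nh3_end, no2_end, no3_end = sol.y[:, -1]
print(f"day 60: NH3 = {nh3_end:.4f}, NO2 = {no2_end:.4f}, NO3 = {no3_end:.4f} mg/L")
```

Because every process is a loss-plus-transfer term, total nitrogen only decreases, reproducing the qualitative "decrease and stabilize" behavior the abstract reports.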
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flows models proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools between practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly and multiple stand-alone software tools are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages of the SaaS (Software as a Service) delivery model and of WebGIS technology, hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
Modeling biochemical transformation processes and information processing with Narrator
Directory of Open Access Journals (Sweden)
Palfreyman Niall M
2007-03-01
Full Text Available Abstract Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with, and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation as well as biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion Narrator is a
A New Perceptual Mapping Model Using Lifting Wavelet Transform
Directory of Open Access Journals (Sweden)
Taha, Taha Basheer
2017-01-01
Full Text Available Perceptual mapping approaches have been widely used in visual information processing in multimedia and Internet of Things (IoT) applications. Accumulative Lifting Difference (ALD) is proposed in this paper as a texture mapping model based on the low-complexity lifting wavelet transform, combined with luminance masking to create an efficient perceptual mapping model for estimating Just Noticeable Distortion (JND) in digital images. In addition to its low-complexity operations, experimental results show that the proposed model can tolerate much more JND noise than previously proposed models.
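The lifting-based texture measure described above can be sketched in a few lines. The predict/update step below is a generic Haar-style lifting scheme, and `accumulated_lifting_difference` is an illustrative stand-in for the paper's ALD formulation; the function names and thresholding are assumptions, not the authors' code.

```python
# Sketch: one predict/update lifting step (Haar-style) and an accumulated
# lifting difference as a crude texture measure. The exact ALD formulation
# in the paper may differ; this only illustrates the lifting idea.

def lifting_step(signal):
    """Split a 1-D signal into approximation and detail via lifting."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def accumulated_lifting_difference(signal):
    """Sum of absolute detail coefficients: large in textured regions."""
    _, detail = lifting_step(signal)
    return sum(abs(d) for d in detail)

flat = [10, 10, 10, 10, 10, 10, 10, 10]
textured = [10, 2, 9, 1, 12, 3, 11, 2]
print(accumulated_lifting_difference(flat))      # 0 for a flat region
print(accumulated_lifting_difference(textured))  # 34 for a textured region
```

Regions with high accumulated detail energy can mask more distortion, which is the intuition behind using such a measure to scale the JND threshold.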
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear multi-class classifier, PSO is established to optimize the parameters of the multi-class SVM model, and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic algorithm optimized SVM, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
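The PSO search over SVM parameters can be sketched as below. To stay self-contained, a smooth surrogate objective stands in for the cross-validated SVM error over (log C, log gamma); in practice the objective would train and score an SVM on the DGA ratio features. All names, constants, and the assumed optimum are illustrative assumptions.

```python
import random

# Minimal particle swarm optimization over a 2-D parameter space, standing
# in for the (log C, log gamma) search described in the abstract.

def surrogate_cv_error(pos):
    # Pretend the best hyperparameters sit at log C = 2, log gamma = -3.
    return (pos[0] - 2.0) ** 2 + (pos[1] + 3.0) ** 2

def pso(objective, n_particles=20, n_iter=200, seed=0):
    rng = random.Random(seed)
    dim = 2
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, err = pso(surrogate_cv_error)
print(best)  # close to [2.0, -3.0]
```

Replacing `surrogate_cv_error` with a k-fold cross-validation score of an SVM trained on the three DGA ratios would reproduce the workflow the abstract describes.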
Yu. A. Rounov; O. G. Shirokov; D. I. Zalizny; D. M. Los
2004-01-01
The paper proposes a thermal model of a power oil-immersed transformer as a system of four homogeneous bodies: winding, oil, core and cooling medium. On the basis of experimental data it is shown that such a model describes the actual thermal processes taking place in a transformer more precisely than the thermal model accepted in GOST 14209-85.
Modelling of Singapore's topographic transformation based on DEMs
Wang, Tao; Belle, Iris; Hassler, Uta
2015-02-01
Singapore's topography has been heavily transformed by industrialization and urbanization processes. To investigate topographic changes and evaluate soil mass flows, historical topographic maps of 1924 and 2012 were employed, and basic topographic features were vectorized. Digital elevation models (DEMs) for the two years were reconstructed based on vector features. Corresponding slope maps, a surface difference map and a scatter plot of elevation changes were generated and used to quantify and categorize the nature of the topographic transformation. The surface difference map is aggregated into five main categories of changes: (1) areas without significant height changes, (2) lowered-down areas where hill ranges were cut down, (3) raised-up areas where valleys and swamps were filled in, (4) reclaimed areas from the sea, and (5) new water-covered areas. Considering spatial proximity and configurations of different types of changes, topographic transformation can be differentiated as either creating inland flat areas or reclaiming new land from the sea. Typical topographic changes are discussed in the context of Singapore's urbanization processes. The two slope maps and elevation histograms show that generally, the topographic surface of Singapore has become flatter and lower since 1924. More than 89% of height changes have happened within a range of 20 m and 95% have been below 40 m. Because of differences in land surveying and map drawing methods, uncertainties and inaccuracies inherent in the 1924 topographic maps are discussed in detail. In this work, a modified version of a traditional scatter plot is used to present height transformation patterns intuitively. This method of deriving categorical maps of topographical changes from a surface difference map can be used in similar studies to qualitatively interpret transformation. Slope maps and histograms were also used jointly to reveal additional patterns of topographic change.
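The five-category change map described above can be derived from two DEM rasters with simple array operations. The thresholds, the sea-level convention (elevation ≤ 0 treated as water), and the sample values below are illustrative assumptions, not those used in the study.

```python
import numpy as np

# Sketch: derive a categorical change map from two DEMs, following the five
# categories in the abstract. Real DEMs would be large rasters; a 1x4 toy
# array is used here.

def categorize_change(dem_old, dem_new, tol=2.0):
    """Return an integer category map from two elevation models (metres).

    1: no significant change   2: lowered-down (cut)   3: raised-up (fill)
    4: reclaimed from the sea  5: new water-covered area
    """
    diff = dem_new - dem_old
    cat = np.ones_like(dem_old, dtype=int)      # 1: no significant change
    cat[diff < -tol] = 2                        # 2: hill ranges cut down
    cat[diff > tol] = 3                         # 3: valleys/swamps filled in
    cat[(dem_old <= 0) & (dem_new > 0)] = 4     # 4: reclaimed from the sea
    cat[(dem_old > 0) & (dem_new <= 0)] = 5     # 5: new water-covered area
    return cat

dem_1924 = np.array([[30.0, 5.0, -1.0, 4.0]])
dem_2012 = np.array([[12.0, 5.5,  3.0, -0.5]])
print(categorize_change(dem_1924, dem_2012))  # [[2 1 4 5]]
```

Aggregating the category counts per cell area would then give the kind of surface difference statistics (e.g. the share of changes within 20 m) reported in the abstract.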
PHAZE, Parametric Hazard Function Estimation
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
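For time-censored data of the kind described above, the maximum likelihood estimators for two of the hazard models have closed forms. The sketch below uses the standard NHPP results for a constant hazard and a Weibull (power-law) hazard on an observation window (0, T]; the function names and sample failure times are illustrative, not PHAZE's interface.

```python
import math

# MLEs for a time-censored non-homogeneous Poisson process observed on (0, T]:
#   constant hazard  h(t) = lam            -> lam_hat = n / T
#   Weibull hazard   h(t) = lam*beta*t**(beta-1)
#                                          -> beta_hat = n / sum ln(T / t_i)

def constant_hazard_mle(failure_times, T):
    """MLE of a constant failure rate: failure count over exposure time."""
    return len(failure_times) / T

def weibull_hazard_mle(failure_times, T):
    """MLEs (lam, beta) for the power-law intensity lam*beta*t**(beta-1)."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

times = [12.0, 35.0, 51.0, 60.0, 88.0]   # hypothetical failure times
T = 100.0                                 # end of observation period
print(constant_hazard_mle(times, T))      # 0.05 failures per unit time
lam, beta = weibull_hazard_mle(times, T)
print(beta > 1)  # beta > 1 would indicate an increasing failure rate
```

Testing whether beta differs significantly from 1 is exactly the "increasing failure rate" hypothesis test emphasized in the program description.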
VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation
Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.
2009-12-01
Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the
Energy Technology Data Exchange (ETDEWEB)
Ozolin, Y.E.; Karol, I.L. [Main Geophysical Observatory, St. Petersburg (Russian Federation); Ramaroson, R. [Office National d`Etudes et de Recherches Aerospatiales (ONERA), 92 - Chatillon (France)
1997-12-31
A box model for coupled gaseous and aqueous phases is used for a sensitivity study of the potential transformation of trace gases in a cloud environment. The rate of this transformation decreases with decreasing pH in the droplets, with decreasing photodissociation rates inside the cloud, and with increasing droplet size. Model calculations show the potential formation of H{sub 2}O{sub 2} in the aqueous phase and the transformation of gaseous HNO{sub 3} into NO{sub x} in a cloud. The model is applied to explore the evolution of aircraft exhaust in a plume inside a cloud. (author) 10 refs.
Jazebi, Saeed
This thesis is a step forward toward achieving the final objective of creating a fully dual model for transformers, including eddy currents and nonlinearities of the iron core, using the fundamental electrical components already available in EMTP-type programs. The model is effective for studying the performance of transformers during power system transients. This is very important for transformer designers, because transformer insulation is dimensioned by the overvoltages caused by lightning or switching operations. There are also internally induced transients that occur when a switch is actuated: for example, switching actions for the reconfiguration of distribution systems, which offer economic advantages, or protective actions to clear faults and large short-circuit currents. Many of the smart grid concepts currently under development by utilities rely heavily on switching to optimize resources, which produces transients in the system. On the other hand, inrush currents produce mechanical forces which deform transformer windings and cause malfunction of the differential protection. Transformer performance under ferroresonance and geomagnetically induced currents also needs to be studied. In this thesis, a physically consistent dual model applicable to single-phase two-winding transformers is proposed. First, the topology of a dual electrical equivalent circuit is obtained from the direct application of the principle of duality. Then, the model parameters are computed considering the variations of the transformer's electromagnetic behavior under various operating conditions. Current modeling techniques use different topological models to represent diverse transient situations. The reversible model proposed in this thesis unifies the terminal and topological equivalent circuits. The model remains invariable for all low-frequency transients, including deep saturation conditions driven from either of the two windings. The very high saturation region of the
MODELING AND SHIFTING FOCUS AS A FACILITATOR FOR INTENTIONAL EMERGENCE IN TRANSFORMATION DESIGN
DEFF Research Database (Denmark)
Nielsen, Louise Møller; Hansen, Poul H. Kyvsgård; Mabogunje, Ade
2009-01-01
In this paper we discuss the phenomenon "intentional emergence" in a transformation design context. We examine modeling and play enablers for intentional emergence and report on experiences with the Lego Serious Play method. The empirical observations are based on a real-time transformation design...
Vatankhah, Soudabeh; Alirezaei, Samira; Khosravizadeh, Omid; Mirbahaeddin, Seyyed Elmira; Alikhani, Mahtab; Alipanah, Mobarakeh
2017-01-01
Background In today’s transforming world, increased productivity and efficient use of existing facilities are no longer a choice but a necessity. In this line, attention to change and transformation is one of the factors affecting the growth of productivity in organizations, especially in hospitals. Aim To examine the effect of transformational leadership on the productivity of employees in teaching hospitals affiliated to Iran University of Medical Sciences. Methods This cross-sectional study was conducted on 254 participants from educational and medical centers affiliated to Iran University of Medical Sciences (Tehran, Iran) in 2016. The standard questionnaires of Bass & Avolio and of Hersi & Goldsmith were used to assess transformational leadership and level of productivity, respectively. The research assumptions were tested at a significance level of 0.05 by applying descriptive statistics and structural equation modeling (SEM) using SPSS 19 and Amos 24. Results Fit indicators of the assessed model after amendment include a chi-square to degrees-of-freedom ratio of 2.756, a CFI of 0.95, an IFI of 0.92, and a root mean square error of approximation (RMSEA) of 0.10. These results indicate that the model fits well after the amendment. Analysis of the model’s assumptions and the final model of the research reveals an effect of transformational leadership on employees’ productivity, with a coefficient of 0.83 (p=0.001). Conclusion This research indicates that the more the leadership and decision-making style in hospitals leans towards the transformational mode, the more positive outcomes it brings among employees and the organization due to increased productivity. Therefore, it is essential to pay focused attention to training/educational programs in organizations to create and encourage transformational leadership behaviors, which hopefully lead to more productive employees. PMID:28979731
van der Net, Jeroen B.; Janssens, A. Cecile J. W.; Eijkemans, Marinus J. C.; Kastelein, John J. P.; Sijbrands, Eric J. G.; Steyerberg, Ewout W.
2008-01-01
Cross-sectional genetic association studies can be analyzed using Cox proportional hazards models with age as time scale, if age at onset of disease is known for the cases and age at data collection is known for the controls. We assessed to what degree and under what conditions Cox proportional
A "mental models" approach to the communication of subsurface hydrology and hazards
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
Transformational change in health care systems: an organizational model.
Lukas, Carol VanDeusen; Holmes, Sally K; Cohen, Alan B; Restuccia, Joseph; Cramer, Irene E; Shwartz, Michael; Charns, Martin P
2007-01-01
The Institute of Medicine's 2001 report Crossing the Quality Chasm argued for fundamental redesign of the U.S. health care system. Six years later, many health care organizations have embraced the report's goals, but few have succeeded in making the substantial transformations needed to achieve those aims. This article offers a model for moving organizations from short-term, isolated performance improvements to sustained, reliable, organization-wide, and evidence-based improvements in patient care. Longitudinal comparative case studies were conducted in 12 health care systems using a mixed-methods evaluation design based on semistructured interviews and document review. Participating health care systems included seven systems funded through the Robert Wood Johnson Foundation's Pursuing Perfection Program and five systems with long-standing commitments to improvement and high-quality care. Five interactive elements appear critical to successful transformation of patient care: (1) Impetus to transform; (2) Leadership commitment to quality; (3) Improvement initiatives that actively engage staff in meaningful problem solving; (4) Alignment to achieve consistency of organization goals with resource allocation and actions at all levels of the organization; and (5) Integration to bridge traditional intra-organizational boundaries among individual components. These elements drive change by affecting the components of the complex health care organization in which they operate: (1) Mission, vision, and strategies that set its direction and priorities; (2) Culture that reflects its informal values and norms; (3) Operational functions and processes that embody the work done in patient care; and (4) Infrastructure such as information technology and human resources that support the delivery of patient care. Transformation occurs over time with iterative changes being sustained and spread across the organization. The conceptual model holds promise for guiding health care
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggering causes for hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley in the north-eastern Italian Alps, which is prone to debris flows and floods. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
Energy Technology Data Exchange (ETDEWEB)
Lai Wei, E-mail: laiwei@msu.ed [Department of Chemical Engineering and Materials Science, Michigan State University, East Lansing, MI 48824 (United States); Ciucci, Francesco [Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences, University of Heidelberg, INF 368 D - 69120 Heidelberg (Germany)
2010-12-15
Thermodynamics and kinetics of phase transformation in intercalation battery electrodes are investigated by phenomenological models which include a mean-field lattice-gas thermodynamic model and a generalized Poisson-Nernst-Planck equation set based on linear irreversible thermodynamics. The application of modeling to a porous intercalation electrode leads to a hierarchical equivalent circuit with elements of explicit physical meanings. The equivalent circuit corresponding to the intercalation particle of planar, cylindrical and spherical symmetry is reduced to a diffusion equation with concentration dependent diffusivity. The numerical analysis of the diffusion equation suggests the front propagation behavior during phase transformation. The present treatment is also compared with the conventional moving boundary and phase field approaches.
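The reduction to a diffusion equation with concentration-dependent diffusivity can be illustrated with an explicit finite-difference scheme in conservative (flux) form. The form of D(c), the grid, and the time step below are illustrative assumptions; a phase-transforming electrode would use a thermodynamically derived, strongly non-monotonic D(c).

```python
# Sketch: explicit finite differences for dc/dt = d/dx( D(c) dc/dx ) with
# zero-flux boundaries, the kind of equation the porous-electrode treatment
# reduces to. Written in flux form so total mass is conserved exactly.

def diffusivity(c):
    return 0.1 * (1.0 + 4.0 * c * (1.0 - c))  # illustrative D(c)

def step(c, dx, dt):
    """One explicit time step; flux[i] crosses the face between cells i-1, i."""
    n = len(c)
    flux = [0.0] * (n + 1)  # zero flux at both outer faces
    for i in range(1, n):
        d_face = 0.5 * (diffusivity(c[i - 1]) + diffusivity(c[i]))
        flux[i] = -d_face * (c[i] - c[i - 1]) / dx
    return [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

dx, dt = 0.05, 0.002  # dt below the stability limit dx**2 / (2 * D_max)
c = [1.0 if i < 10 else 0.0 for i in range(20)]  # sharp front between "phases"
for _ in range(500):
    c = step(c, dx, dt)
print(round(sum(c) * dx, 6))  # total mass stays 0.5 while the front spreads
```

Watching how the initially sharp front relaxes is a crude analogue of the front propagation behavior the numerical analysis in the abstract refers to.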
Directory of Open Access Journals (Sweden)
Daniel Asare-Kyei
2015-07-01
Full Text Available Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability while global datasets have resolutions too coarse to be relevant for local scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI. Results show that about half of the study areas fall into high intensity flood zones. Empirical validation using statistical confusion matrix and the principles of Participatory GIS show that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported with local expert knowledge which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers to reduce the risk to flood disasters at the community level where risk outcomes are first materialized.
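A flood hazard index of the kind described can be sketched as a weighted combination of normalized factor layers, binned into hazard classes. The factor names, weights, and class thresholds below are illustrative assumptions, not the ones derived in the study.

```python
# Sketch: min-max normalize factor layers, combine with weights into a
# Flood Hazard Index (FHI), and bin into low/medium/high hazard classes.

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def flood_hazard_index(peak_runoff, inv_elevation, weights=(0.6, 0.4)):
    """Weighted sum of normalized peak runoff and inverse elevation per cell."""
    r, e = normalize(peak_runoff), normalize(inv_elevation)
    return [weights[0] * ri + weights[1] * ei for ri, ei in zip(r, e)]

def classify(fhi, low=0.33, high=0.66):
    return ["high" if v >= high else "medium" if v >= low else "low"
            for v in fhi]

runoff = [120.0, 80.0, 30.0, 150.0]   # peak runoff per cell (hypothetical)
inv_elev = [0.9, 0.5, 0.1, 0.8]       # inverse normalized elevation
print(classify(flood_hazard_index(runoff, inv_elev)))
```

Validating such a map against watermarks, media reports, or participatory GIS surveys, as the abstract describes, would then yield the confusion-matrix accuracy figures.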
Esscher transforms and the minimal entropy martingale measure for exponential Lévy models
DEFF Research Database (Denmark)
Hubalek, Friedrich; Sgarra, C.
In this paper we offer a systematic survey and comparison of the Esscher martingale transform for linear processes, the Esscher martingale transform for exponential processes, and the minimal entropy martingale measure for exponential lévy models and present some new results in order to give...
DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard...... that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup......'s dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude...
Landslides Hazard Assessment Using Different Approaches
Directory of Open Access Journals (Sweden)
Coman Cristina
2017-06-01
Full Text Available Romania is one of the European countries with a high frequency of landslide occurrence. Landslide hazard maps are designed by considering the interaction of several factors which, by their joint action, may affect the equilibrium state of natural slopes. The aim of this paper is landslide hazard assessment using the methodology provided by Romanian national legislation and a widely used statistical method. The final results of these two analyses are quantitative or semi-quantitative landslide hazard maps, created in a geographic information system environment. The database used for this purpose includes: geological and hydrogeological data, a digital terrain model, hydrological data, land use, seismic action, anthropic action and an inventory of active landslides. The GIS landslide hazard models were built for the geographical area of the city of Iasi, located in the north-eastern part of Romania.
Multi-hazard risk analysis related to hurricanes
Lin, Ning
Hurricanes present major hazards to the United States. Associated with extreme winds, heavy rainfall, and storm surge, landfalling hurricanes often cause enormous structural damage to coastal regions. Hurricane damage risk assessment provides the basis for loss mitigation and related policy-making. Current hurricane risk models, however, often oversimplify the complex processes of hurricane damage. This dissertation aims to improve existing hurricane risk assessment methodology by coherently modeling the spatial-temporal processes of storm landfall, hazards, and damage. Numerical modeling technologies are used to investigate the multiplicity of hazards associated with landfalling hurricanes. The application and effectiveness of current weather forecasting technologies to predict hurricane hazards are investigated. In particular, the Weather Research and Forecasting model (WRF), with the Geophysical Fluid Dynamics Laboratory (GFDL)'s hurricane initialization scheme, is applied to the simulation of the wind and rainfall environment during hurricane landfall. The WRF model is further coupled with the Advanced Circulation (ADCIRC) model to simulate storm surge in coastal regions. A case study examines the multiple hazards associated with Hurricane Isabel (2003). Also, a risk assessment methodology is developed to estimate the probability distribution of hurricane storm surge heights along the coast, particularly for data-scarce regions, such as New York City. This methodology makes use of relatively simple models, specifically a statistical/deterministic hurricane model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, to simulate large numbers of synthetic surge events, and conducts statistical analysis. The estimation of hurricane landfall probability and hazards are combined with structural vulnerability models to estimate hurricane damage risk. Wind-induced damage mechanisms are extensively studied. An innovative windborne debris risk model is
Wave Transformation Over Reefs: Evaluation of One-Dimensional Numerical Models
National Research Council Canada - National Science Library
Demirbilek, Zeki; Nwogu, Okey G; Ward, Donald L; Sanchez, Alejandro
2009-01-01
Three one-dimensional (1D) numerical wave models are evaluated for wave transformation over reefs and estimates of wave setup, runup, and ponding levels in an island setting where the beach is fronted by fringing reef and lagoons...
Application of differential transformation method for solving dengue transmission mathematical model
Ndii, Meksianis Z.; Anggriani, Nursanti; Supriatna, Asep K.
2018-03-01
The differential transformation method (DTM) is a semi-analytical numerical technique which depends on the Taylor series and has applications in many areas, including biomathematics. The aim of this paper is to employ the DTM to solve a system of non-linear differential equations for a dengue transmission mathematical model. Analytical and numerical solutions are determined and the results are compared to those of the Runge-Kutta method. We found good agreement between the DTM and the Runge-Kutta method.
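The core of the DTM is a recurrence on Taylor coefficients, with non-linear terms handled by convolution. The sketch below applies it to a single illustrative ODE, y' = -y² with y(0) = 1 (exact solution 1/(1+x)); the dengue model in the paper applies the same idea componentwise to its system. The problem choice and function names are assumptions for illustration.

```python
# Sketch of the differential transformation method (DTM) for y' = -y**2,
# y(0) = 1. The transform of y' is (k+1)*Y(k+1); the transform of y**2 is
# the convolution sum_j Y(j)*Y(k-j), so the ODE becomes pure algebra on
# Taylor coefficients: Y(k+1) = -conv(Y, Y)(k) / (k + 1).

def dtm_coefficients(n_terms):
    Y = [1.0]  # Y(0) = y(0)
    for k in range(n_terms - 1):
        conv = sum(Y[j] * Y[k - j] for j in range(k + 1))  # transform of y**2
        Y.append(-conv / (k + 1))
    return Y

def dtm_eval(Y, x):
    """Sum the recovered Taylor series at x."""
    return sum(c * x ** k for k, c in enumerate(Y))

Y = dtm_coefficients(15)
print(Y[:4])                       # [1.0, -1.0, 1.0, -1.0]: series of 1/(1+x)
print(round(dtm_eval(Y, 0.2), 6))  # 0.833333, matching 1/1.2
```

Comparing `dtm_eval` against a Runge-Kutta integration of the same ODE mirrors the validation performed in the paper.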
Seismic hazard, risk, and design for South America
Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison
2018-01-01
We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear‐wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best
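The exceedance levels quoted above follow the standard Poisson convention: a probability P of exceedance in t years corresponds to an annual rate λ = −ln(1 − P)/t and a mean return period 1/λ. The short sketch below converts the three map levels.

```python
import math

# Convert "P% probability of exceedance in t years" (Poisson assumption)
# into an annual exceedance rate and a mean return period.

def return_period(prob_exceedance, t_years):
    lam = -math.log(1.0 - prob_exceedance) / t_years  # annual exceedance rate
    return 1.0 / lam

for p in (0.02, 0.10, 0.50):
    print(p, round(return_period(p, 50)))  # ~2475, ~475, and ~72 years
```

This is why the 2%-in-50-years maps are often described as "2475-year return period" ground motions.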
Celis, C.; Sepulveda, S. A.; Castruccio, A.; Lara, M.
2017-12-01
Debris and mudflows are some of the main geological hazards in the mountain foothills of Central Chile. The risk of flows triggered in the basins of ravines that drain the Andean frontal range into the capital city, Santiago, increases with time due to accelerated urban expansion. Susceptibility assessments by several authors have identified the main active ravines in the area: Macul and San Ramon ravines have high to medium debris flow susceptibility, whereas Lo Cañas, Apoquindo and Las Vizcachas ravines have medium to low susceptibility. This study focuses on delimiting the potentially hazardous zones using the numerical simulation program RAMMS-Debris Flows with the Voellmy model approach, and the debris-flow model LAHARZ. For the RAMMS approach, this is carried out by back-calculating the frictional parameters in the depositional zone from a known event, the debris and mudflows of May 3rd, 1993 in the Macul and San Ramon ravines. For the same scenario, we calibrate the LAHARZ coefficients to match the conditions of the mountain foothills of Santiago. We apply the resulting parameters to every main ravine in the study area, justified mainly by the similarity in slopes and transported material. Simulations were made for the worst-case scenario, caused by the combination of intense rainfall storms, a high 0°C isotherm level and material availability in the basins where the flows are triggered. The results show that the runout distances are well simulated, so a debris-flow hazard map could be developed with these models. Correlation issues concerning the run-up, deposit thickness and transversal areas are reported. Hence, the models do not entirely represent the complexity of the phenomenon, but they are a reliable approximation for preliminary hazard maps.
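The Voellmy approach mentioned above describes flow resistance with two back-calculated parameters: a dry-Coulomb friction coefficient mu and a turbulent coefficient xi. A minimal sketch of the resistance law (the density and parameter values are illustrative assumptions, not the calibrated Santiago values):

```python
import math

RHO = 2000.0  # flow density [kg/m^3] (assumed)
G = 9.81      # gravitational acceleration [m/s^2]

def voellmy_resistance(h, u, slope_deg, mu=0.15, xi=200.0):
    """Voellmy flow resistance per unit basal area [Pa]: a dry-Coulomb
    term scaling with normal stress plus a turbulent term scaling with
    the square of velocity. mu and xi are the friction parameters that
    back-calculation against a known event would supply."""
    normal_stress = RHO * G * h * math.cos(math.radians(slope_deg))
    return mu * normal_stress + RHO * G * u ** 2 / xi
```

At rest (u = 0) only the Coulomb term remains; the turbulent term dominates at high velocity, which is what limits simulated runout.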
Seismic hazard analysis. A methodology for the Eastern United States
Energy Technology Data Exchange (ETDEWEB)
Bernreuter, D L
1980-08-01
This report presents a probabilistic approach for estimating the seismic hazard in the Central and Eastern United States. The probabilistic model (Uniform Hazard Methodology) systematically incorporates the subjective opinion of several experts in the evaluation of seismic hazard. Subjective input, assumptions and associated hazard are kept separate for each expert so as to allow review and preserve diversity of opinion. The report is organized into five sections: Introduction, Methodology Comparison, Subjective Input, Uniform Hazard Methodology (UHM), and Uniform Hazard Spectrum. Section 2, Methodology Comparison, briefly describes the present approach and compares it with other available procedures. The remainder of the report focuses on the UHM. Specifically, Section 3 describes the elicitation of subjective input; Section 4 gives details of various mathematical models (earthquake source geometry, magnitude distribution, attenuation relationship) and how these models are combined to calculate seismic hazard. The last section, Uniform Hazard Spectrum, highlights the main features of typical results. Specific results and sensitivity analyses are not presented in this report. (author)
Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael
2015-04-01
Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of resulting tsunami waves and iv) mapping of intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effects on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so-called «intensity maps» for flooding that provide a link to the established hazard-mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario; however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of
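The final step, combining flow depth and velocity into flooding «intensity maps», can be sketched as a simple per-cell classifier; the thresholds below are illustrative stand-ins in the spirit of Swiss hazard-mapping practice, not the values used in the study:

```python
def flood_intensity(depth_m, velocity_ms):
    """Classify inundation intensity at one map cell from flow depth [m]
    and flow velocity [m/s]. The product depth*velocity serves as a
    momentum proxy; all class boundaries are illustrative assumptions."""
    q = depth_m * velocity_ms  # specific momentum proxy [m^2/s]
    if depth_m > 2.0 or q > 2.0:
        return "high"
    if depth_m > 0.5 or q > 0.5:
        return "medium"
    if depth_m > 0.0:
        return "low"
    return "none"
```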
International Nuclear Information System (INIS)
Favarò, Francesca M.; Saleh, Joseph H.
2016-01-01
Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the pervading probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools here developed are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation
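The prognostic side of the framework, estimating when the hazard level will (b)reach a critical threshold, can be caricatured by a first-order extrapolation of the hazard level along its current rate of change; this is only a sketch of the time-to-accident idea, not the authors' state-space formulation:

```python
def time_to_threshold(hazard_now, hazard_rate, threshold):
    """Prognostic proxy for 'time to accident': linearly extrapolate the
    current hazard level toward a critical threshold. Returns 0.0 if the
    threshold is already (b)reached and None if the hazard level is not
    growing. Purely illustrative of the paper's two hazard coordinates
    (danger index, time-to-accident)."""
    if hazard_now >= threshold:
        return 0.0
    if hazard_rate <= 0:
        return None  # no breach predicted under current dynamics
    return (threshold - hazard_now) / hazard_rate
```

Monitoring such an estimate online is what would feed a go/no-go decision aid like the rejected-takeoff example.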
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
A methodology for physically based rockfall hazard assessment
Directory of Open Access Journals (Sweden)
G. B. Crosta
2003-01-01
Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at regional and local scales, both along linear features and within exposed areas. An objective approach based on three-dimensional matrices providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrices has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to make the assessment criteria as objective as possible.
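The core combination step, classing the per-cell frequency of passages, velocity and block height and reading a positional index from a 3-D matrix, can be sketched as follows; the class boundaries are invented for illustration and are not the paper's calibrated values:

```python
def rockfall_hazard_index(freq, velocity, height,
                          f_bins=(1.0, 10.0),   # passages per cell (assumed)
                          v_bins=(5.0, 15.0),   # block velocity [m/s] (assumed)
                          h_bins=(1.0, 5.0)):   # block height [m] (assumed)
    """Per-cell positional hazard index in the spirit of Crosta's 3-D
    matrix approach: each of the three modelled quantities is classed
    into 1 (low), 2 (medium) or 3 (high), and the triple of classes
    addresses a cell of the hazard matrix."""
    def klass(x, bins):
        return sum(x > b for b in bins) + 1  # class in {1, 2, 3}
    return (klass(freq, f_bins), klass(velocity, v_bins), klass(height, h_bins))
```

In a GIS workflow this function would be mapped over the three raster layers produced by the 3-D runout model.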
Natural phenomena hazards project for Department of Energy sites
International Nuclear Information System (INIS)
Coats, D.W.
1985-01-01
Lawrence Livermore National Laboratory (LLNL) has developed seismic and wind hazard models for the Office of Nuclear Safety (ONS), Department of Energy (DOE). The work is part of a three-phase effort aimed at establishing uniform building design criteria for seismic and wind hazards at DOE sites throughout the United States. In Phase 1, LLNL gathered information on the sites and their critical facilities, including nuclear reactors, fuel-reprocessing plants, high-level waste storage and treatment facilities, and special nuclear material facilities. In Phase 2, development of seismic and wind hazard models, was initiated. These hazard models express the annual probability that the site will experience an earthquake or wind speed greater than some specified magnitude. In the final phase, it is anticipated that the DOE will use the hazard models to establish uniform criteria for the design and evaluation of critical facilities. 13 references, 2 figures, 1 table
Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.
Nadal, Clement; Pigache, Francois
2009-11-01
This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials are presented for both models. Finally, the validity of the electrodynamical model of nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.
Directory of Open Access Journals (Sweden)
Majeed Nauman
2017-12-01
Leadership and organizational citizenship behavior (OCB) have remained at the pinnacle of organizational behavior research for decades and have attained significant consideration from scholars seeking to define the multifaceted dynamics of leadership and their influence on followers' behavior at work. The voluntary behavior of organizational citizenship goes beyond formal job duties and improves organizational effectiveness. This study explores the association between transformational leadership and the organizational citizenship behavior of teachers in public sector higher education institutions in Pakistan. Study of organizational citizenship behavior in educational organizations and among academicians is of high value and clearly requires attention. This study examines the direct and indirect influence of transformational leadership by exploring the mediating role of emotional intelligence. The model was tested by employing structural equation modelling on survey responses collected from academicians. Results from 220 responses indicated that the relationship between transformational leadership and organizational citizenship behavior is statistically significant, with emotional intelligence playing an important role as a mediator. The results support and add to the positive effects of transformational leadership style interconnected with extra-role behavior at work, making it more meaningful. The findings make a significant contribution to the leadership and organizational behavior literature in the higher education sector and suggest that organizations should implement practices that help enhance the level of organizational citizenship behavior.
Earthquake Hazard and Risk in Alaska
Black Porto, N.; Nyst, M.
2014-12-01
Alaska is one of the most seismically active and tectonically diverse regions in the United States. To examine risk, we have updated the seismic hazard model in Alaska. The current RMS Alaska hazard model is based on the 2007 probabilistic seismic hazard maps for Alaska (Wesson et al., 2007; Boyd et al., 2007). The 2015 RMS model will update several key source parameters, including extending the earthquake catalog, implementing a new set of crustal faults, and updating the subduction zone geometry and recurrence rate. First, we extend the earthquake catalog to 2013, decluster the catalog, and compute new background rates. We then create a crustal fault model based on the Alaska 2012 fault and fold database. This new model increases the number of crustal faults from ten in 2007 to 91 faults in the 2015 model, including the addition of the western Denali fault, the Cook Inlet folds near Anchorage, and thrust faults near Fairbanks. Previously the subduction zone was modeled at a uniform depth. In this update, we model the intraslab as a series of deep stepping events. We also use the best available data, such as Slab 1.0, to update the geometry of the subduction zone. The city of Anchorage represents 80% of the risk exposure in Alaska. In the 2007 model, the hazard in Alaska was dominated by the frequent rate of magnitude 7 to 8 events (Gutenberg-Richter distribution), while large magnitude 8+ events had a low recurrence rate (characteristic) and therefore did not contribute as much to the overall risk. We will review these recurrence rates and present the results and impact for Anchorage. We will compare our hazard update to the 2007 USGS hazard map, and discuss the changes and drivers for these changes. Finally, we will examine the impact model changes have on Alaska earthquake risk. Risk metrics considered include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the
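The Gutenberg-Richter recurrence model mentioned above ties the annual rate of events at or above a magnitude to two constants; a minimal sketch (the a- and b-values are illustrative, not Alaska's):

```python
def gr_annual_rate(m, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m from the
    Gutenberg-Richter relation log10 N(>=m) = a - b*m. The a- and
    b-values here are placeholders; real models fit them to a
    declustered catalog and often truncate at a maximum magnitude."""
    return 10.0 ** (a - b * m)
```

With b = 1, each unit step in magnitude is ten times rarer, which is why the M7-8 band can dominate risk while M8+ characteristic events contribute less.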
Fabbri, Debora; Minella, Marco; Maurino, Valter; Minero, Claudio; Vione, Davide
2015-01-01
This work models the phototransformation kinetics in surface waters of five phenylurea herbicides (diuron, fenuron, isoproturon, metoxuron and chlortoluron), for which important photochemical parameters are available in the literature (direct photolysis quantum yields and reaction rate constants with ·OH, CO3(-·) and the triplet states of chromophoric dissolved organic matter, (3)CDOM*). Model calculations suggest that isoproturon and metoxuron would be the least photochemically persistent and diuron the most persistent compound. Reactions with ·OH and (3)CDOM* would be the main phototransformation pathways for all compounds in the majority of environmental conditions. Reaction with CO3(-·) could be important in waters with low dissolved organic carbon (DOC), while direct photolysis would be negligible for fenuron, quite important for chlortoluron, and somewhat significant for the other compounds. The direct photolysis of metoxuron and diuron is known to increase toxicity, and such a photoreaction pathway would be enhanced at intermediate DOC values (1-4 mg C L(-1)). The reaction between phenylureas and ·OH is known to produce toxic intermediates, unlike that with (3)CDOM*. Therefore, the shift of reactivity from ·OH to (3)CDOM* with increasing DOC could reduce the environmental impact of photochemical transformation. Copyright © 2014 Elsevier Ltd. All rights reserved.
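The kinetic bookkeeping behind such a model is pseudo-first-order: the overall rate constant is direct photolysis plus each second-order rate constant multiplied by the steady-state concentration of the corresponding transient (·OH, CO3·-, 3CDOM*). A sketch with made-up, order-of-magnitude numbers, not the measured phenylurea parameters:

```python
import math

def total_photo_rate(k_second_order, steady_state_conc, k_direct=0.0):
    """Pseudo-first-order phototransformation rate constant [s^-1]:
    direct photolysis plus sum over transients of k2 [M^-1 s^-1] times
    the transient's steady-state concentration [M]."""
    return k_direct + sum(k2 * c for k2, c in zip(k_second_order, steady_state_conc))

# Illustrative values only: k2 for (·OH, CO3·-, 3CDOM*) and their
# assumed steady-state concentrations in a sunlit surface water.
k = total_photo_rate([7e9, 1e7, 1e9], [1e-16, 1e-14, 1e-15], k_direct=1e-7)
half_life_s = math.log(2) / k  # roughly 4 days of continuous irradiation
```

Varying the assumed DOC-dependent concentrations is how such a model predicts the shift of reactivity from ·OH toward 3CDOM*.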
Chemical Transformation Motifs --- Modelling Pathways as Integer Hyperflows
DEFF Research Database (Denmark)
Andersen, Jakob L.; Flamm, Christoph; Merkle, Daniel
2018-01-01
analysis are discussed in detail. To demonstrate the applicability of the mathematical framework to real-life problems we first explore the design space of possible non-oxidative glycolysis pathways and show that recent manually designed pathways can be further optimised. We then use a model of sugar...... chemistry to investigate pathways in the autocatalytic formose process. A graph transformation-based approach is used to automatically generate the reaction networks of interest....
White, Isabel; Liu, Taojun; Luco, Nicolas; Liel, Abbie
2017-01-01
The recent steep increase in seismicity rates in Oklahoma, southern Kansas, and other parts of the central United States led the U.S. Geological Survey (USGS) to develop, for the first time, a probabilistic seismic hazard forecast for one year (2016) that incorporates induced seismicity. In this study, we explore a process to ground‐truth the hazard model by comparing it with two databases of observations: modified Mercalli intensity (MMI) data from the “Did You Feel It?” (DYFI) system and peak ground acceleration (PGA) values from instrumental data. Because the 2016 hazard model was heavily based on earthquake catalogs from 2014 to 2015, this initial comparison utilized observations from these years. Annualized exceedance rates were calculated with the DYFI and instrumental data for direct comparison with the model. These comparisons required assessment of the options for converting hazard model results and instrumental data from PGA to MMI for comparison with the DYFI data. In addition, to account for known differences that affect the comparisons, the instrumental PGA and DYFI data were declustered, and the hazard model was adjusted for local site conditions. With these adjustments, examples at sites with the most data show reasonable agreement in the exceedance rates. However, the comparisons were complicated by the spatial and temporal completeness of the instrumental and DYFI observations. Furthermore, most of the DYFI responses are in the MMI II–IV range, whereas the hazard model is oriented toward forecasts at higher ground‐motion intensities, usually above about MMI IV. Nevertheless, the study demonstrates some of the issues that arise in making these comparisons, thereby informing future efforts to ground‐truth and improve hazard modeling for induced‐seismicity applications.
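One of the required conversions, instrumental PGA to MMI, is typically done with a bilinear ground-motion-to-intensity relation; the sketch below follows the shape of the Worden et al. (2012) conversion, but the coefficients are quoted approximately and should be verified against the original before any serious use:

```python
import math

def pga_to_mmi(pga_cm_s2):
    """Approximate bilinear PGA-to-MMI conversion (PGA in cm/s^2).
    Coefficients and the hinge location are stated from memory of
    Worden et al. (2012) and are assumptions, not authoritative values."""
    lp = math.log10(pga_cm_s2)
    if lp <= 1.57:
        return 1.78 + 1.55 * lp
    return -1.60 + 3.70 * lp
```

The steeper upper branch reflects that felt intensity grows faster with shaking once damage-level motions are reached, which matters here because most DYFI responses sit on the gentle lower branch (MMI II-IV).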
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.
A thick-interface model for diffusive and massive phase transformation in substitutional alloys
International Nuclear Information System (INIS)
Svoboda, J.; Vala, J.; Gamsjaeger, E.; Fischer, F.D.
2006-01-01
Based on the application of the thermodynamic extremal principle, a new model for the diffusive and massive phase transformation in multicomponent substitutional alloys is developed. Interfacial reactions such as the rearrangement of the lattice, solute drag and trans-interface diffusion are automatically considered by assigning a finite thickness and a finite mobility to the interface region. As an application of the steady-state solution of the derived evolution equations, the kinetics of the massive γ → α transformation in the Fe-rich Fe-Cr-Ni system is simulated. The thermodynamic properties of the interface may influence significantly the contact conditions at the interface as well as the conditions for the occurrence of the massive transformation and its kinetics. The model is also used for the simulation of the diffusion-induced grain boundary migration in the same system. By application of the model a realistic value for the Gibbs energy per unit interface area is obtained
An advanced model for spreading and evaporation of accidentally released hazardous liquids on land
Trijssenaar-Buhre, I.J.M.; Sterkenburg, R.P.; Wijnant-Timmerman, S.I.
2009-01-01
Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a
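The evaporation rate such consequence models compute is, in the simplest film-transfer picture, a mass-transfer coefficient times the vapour concentration at the pool surface, times the pool area; a hedged sketch (the correlation supplying `k_mass` from wind speed, e.g. a MacKay-Matsugu-type relation, is deliberately left as an input rather than reproduced):

```python
R = 8.314  # universal gas constant [J/(mol K)]

def evaporation_rate(area_m2, k_mass, molar_mass, p_vap, temp_k):
    """Evaporation rate [kg/s] of a liquid pool from a film mass-transfer
    model: flux = k_mass * M * P_vap / (R * T), where k_mass [m/s] is a
    wind-speed-dependent mass-transfer coefficient, M the molar mass
    [kg/mol] and P_vap the vapour pressure [Pa] at the pool temperature."""
    flux = k_mass * molar_mass * p_vap / (R * temp_k)  # kg/(m^2 s)
    return flux * area_m2
```

Coupling this to a spreading model closes the loop: the pool area grows while evaporation (and, for cryogens, boil-off) depletes the mass.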
An advanced model for spreading and evaporation of accidentally released hazardous liquids on land
Trijssenaar-Buhre, I.J.M.; Wijnant-Timmerman, S.L.
2008-01-01
Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a
2015-04-01
HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are...men. Audiology. 1991;30:345–356. Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with
Eastern US seismic hazard characterization update
International Nuclear Information System (INIS)
Savy, J.B.; Boissonnade, A.C.; Mensing, R.W.; Short, C.M.
1993-06-01
In January 1989, LLNL published the results of a multi-year project, funded by NRC, on estimating seismic hazard at nuclear plant sites east of the Rockies. The goal of this study was twofold: to develop a good central estimate (median) of the seismic hazard and to characterize the uncertainty in the estimates of this hazard. In 1989, LLNL was asked by DOE to develop site-specific estimates of the seismic hazard at the Savannah River Site (SRS) in South Carolina as part of the New Production Reactor (NPR) project. For the purpose of the NPR, a complete review of the methodology and of the data acquisition process was performed. Work done under the NPR project has shown that first-order improvement in the estimates of the uncertainty (i.e., lower mean hazard values) could be easily achieved by updating the modeling of the seismicity and ground motion attenuation uncertainty. To this effect, NRC sponsored LLNL to perform a re-elicitation to update the seismicity and ground motion experts' inputs and to revise methods to combine seismicity and ground motion inputs in the seismic hazard analysis for nuclear power plant sites east of the Rocky Mountains. The objective of the recent study was to include the first-order improvements that reflect the latest knowledge in seismicity and ground motion modeling and to produce an update of all the hazard results produced in the 1989 study. In particular, it had been demonstrated that eliciting seismicity information in terms of rates of earthquakes rather than a- and b-values, and changing the elicitation format to a one-on-one interview, improved our ability to express the uncertainty of earthquake rates of occurrence at large magnitudes. Thus, NRC sponsored this update study to refine the model of uncertainty, to re-elicit the experts' interpretations of the zonation and seismicity, and to re-elicit the ground motion models, based on the current state of knowledge.
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.
2009-04-01
In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. There are several techniques available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. On the other hand, quantitative methods are objective and are commonly used due to the correlation between instability factors and the location of landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis to assess landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country of Central America where earthquake-triggered landslides are common phenomena. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 2001 January 13th earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
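The logistic-regression branch of such an analysis reduces, at prediction time, to a weighted sum of standardized instability factors pushed through a sigmoid; a sketch with invented weights (real coefficients would be fitted against the landslide inventory):

```python
import math

def landslide_probability(weights, bias, factors):
    """Logistic-regression susceptibility estimate: `factors` are the
    standardized instability predictors for one cell (slope gradient,
    elevation, aspect, ...), `weights` and `bias` the fitted model
    coefficients. All numeric values used below are illustrative."""
    z = bias + sum(w * x for w, x in zip(weights, factors))
    return 1.0 / (1.0 + math.exp(-z))
```

The ANN alternative replaces the single weighted sum with stacked layers of such sums, trained by back-propagation against the same inventory labels.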
A transparent and data-driven global tectonic regionalization model for seismic hazard assessment
Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice
2018-05-01
A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgements; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications, as well as the subjective probabilities (e.g. degree of being active/degree of being cratonic) that indicate the degree to which a site belongs in a tectonic category.
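Fuzzy classification of this kind rests on membership functions that map an observable onto a degree of belonging in [0, 1]; a trapezoidal membership is the standard building block (the breakpoints below are arbitrary, purely to show how a "degree of being active" could be derived from a geophysical observable):

```python
def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, linear ramp up on [a, b],
    1 on [b, c], linear ramp down on [c, d], 0 above d. In a tectonic
    regionalization, x might be a normalized strain-rate or moment-rate
    observable and the returned value the degree of 'being active'."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

Degrees from several such functions are then combined with fuzzy AND/OR operators (min/max) to yield the per-site tectonic category degrees.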
Models for estimating the radiation hazards of uranium mines
International Nuclear Information System (INIS)
Wise, K.N.
1982-01-01
Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined
Models for estimating the radiation hazards of uranium mines
International Nuclear Information System (INIS)
Wise, K.N.
1990-01-01
Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs
Rate-independent dissipation in phase-field modelling of displacive transformations
Tůma, K.; Stupkiewicz, S.; Petryk, H.
2018-05-01
In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.
King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin
2011-01-01
Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedges' g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
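The c-index (concordance index) reported above is the probability that a randomly chosen attendee who progressed to hazardous drinking received a higher predicted risk than a randomly chosen attendee who did not. A minimal sketch of its computation, using invented risk scores rather than the study's data:

```python
import itertools

def c_index(scores, outcomes):
    """Concordance index: fraction of (case, non-case) pairs in which
    the case has the higher predicted risk; ties count as half."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = tied = 0
    for p, n in itertools.product(pos, neg):
        if p > n:
            concordant += 1
        elif p == n:
            tied += 1
    return (concordant + 0.5 * tied) / (len(pos) * len(neg))

# Hypothetical predicted risks and 6-month outcomes (1 = hazardous drinking)
risks = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
print(round(c_index(risks, outcomes), 3))  # 0.75 for this toy data
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, which is why 0.839 compares favourably with risk algorithms in other medical settings.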
Continental Transform Boundaries: Tectonic Evolution and Geohazards
Directory of Open Access Journals (Sweden)
Michael Steckler
2012-04-01
Continental transform boundaries cross heavily populated regions, and they are associated with destructive earthquakes, for example, the North Anatolian Fault (NAF) across Turkey, the Enriquillo-Plantain Garden fault in Haiti, the San Andreas Fault in California, and the El Pilar fault in Venezuela. Transform basins are important because they are typically associated with 3-D fault geometries controlling segmentation (and thus the size and timing of damaging earthquakes) and because their sediments record both deformation and earthquakes. Even though transform basins have been extensively studied, their evolution remains controversial because we do not understand the specifics of the coupling of vertical and horizontal motions, or the basins' long-term kinematics. Seismic and tsunami hazard assessments require knowing the architecture and kinematics of faults as well as how the faults are segmented.
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
Gato-Rivera, Beatriz
1992-01-01
A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\prime/p$ and the level $l$.
Directory of Open Access Journals (Sweden)
Leah M. Courtland
2012-07-01
The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.
The 2018 and 2020 Updates of the U.S. National Seismic Hazard Models
Petersen, M. D.
2017-12-01
During 2018 the USGS will update the 2014 National Seismic Hazard Models by incorporating new seismicity models, ground motion models, site factors, and fault inputs, and by improving weights to ground motion models using empirical and other data. We will update the earthquake catalog for the U.S. and introduce new rate models. Additional fault data will be used to improve rate estimates on active faults. New ground motion models (GMMs) and site factors for Vs30 have been released by the Pacific Earthquake Engineering Research Center (PEER), and we will consider these in assessing ground motions in craton and extended margin regions of the central and eastern U.S. The USGS will also include basin-depth terms for selected urban areas of the western United States to improve long-period shaking assessments, using published depth estimates to the 1.0 and 2.5 km/s shear wave velocities. We will produce hazard maps for input into the building codes that span a broad range of periods (0.1 to 5 s) and site classes (shear wave velocity from 2000 m/s to 200 m/s in the upper 30 m of the crust, Vs30). In the 2020 update we plan to include: a new national crustal model that defines the basin depths required in the latest GMMs, new 3-D ground motion simulations for several urban areas, new magnitude-area equations, and new fault geodetic and geologic strain rate models. The USGS will also consider these 3-D ground motion simulations for inclusion in the long-period maps. These new models are being evaluated and will be discussed at one or more regional and topical workshops held at the beginning of 2018.
Seismic hazard assessment of Iran
Directory of Open Access Journals (Sweden)
M. Ghafory-Ashtiany
1999-06-01
The development of the new seismic hazard map of Iran is based on probabilistic seismic hazard computation using historical earthquake data, geology, tectonics, fault activity and seismic source models in Iran. These maps have been prepared to indicate the earthquake hazard of Iran in the form of iso-acceleration contour lines and seismic hazard zoning, using current probabilistic procedures. They display probabilistic estimates of Peak Ground Acceleration (PGA) for return periods of 75 and 475 years. The maps have been divided into intervals of 0.25 degrees in both latitudinal and longitudinal directions to calculate the peak ground acceleration values at each grid point and draw the seismic hazard curves. The results presented in this study will provide the basis for the preparation of seismic risk maps, the estimation of earthquake insurance premiums, and the preliminary site evaluation of critical facilities.
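The 75- and 475-year return periods quoted above correspond, under the Poisson assumption conventional in probabilistic seismic hazard analysis, to roughly 49% and 10% probabilities of exceedance over a 50-year exposure time. A small sketch of that standard conversion (the Poisson model is the usual PSHA convention, not something specific to this paper):

```python
import math

def exceedance_probability(return_period_years, exposure_years):
    """P(at least one exceedance during the exposure time), assuming
    exceedances arrive as a Poisson process with annual rate 1/T."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

for T in (75, 475):
    p = exceedance_probability(T, 50)
    print(f"T = {T:3d} yr -> {100 * p:.1f}% chance of exceedance in 50 yr")
```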
Tsunami hazard map in eastern Bali
International Nuclear Information System (INIS)
Afif, Haunan; Cipta, Athanasius
2015-01-01
Bali is a popular tourist destination for both Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and the Eurasian Plate in the south, and to a back-arc thrust off the northern coast, leaving Bali prone to earthquakes and tsunamis. A tsunami hazard map is needed for a better understanding of the hazard level in a particular area, and tsunami modeling is one of the most reliable techniques for producing such a map. Tsunami modeling was conducted using TUNAMI N2 for two tsunami source scenarios: the subduction zone in the south of Bali and the back thrust in the north of Bali. The tsunami hazard is divided into 3 zones: a high hazard zone with inundation heights of more than 3 m, a moderate hazard zone with inundation heights of 1 to 3 m, and a low hazard zone with inundation heights of less than 1 m. The two scenarios showed that the southern region has a greater potential tsunami impact than the northern areas. This is evident in the distribution of the inundated area: in the south of Bali, including the islands of Nusa Penida, Nusa Lembongan and Nusa Ceningan, it is wider than on the northern coast, although the northern region of Nusa Penida Island is more inundated due to the coastal topography.
Estimating hurricane hazards using a GIS system
Directory of Open Access Journals (Sweden)
A. Taramelli
2008-08-01
This paper develops a GIS-based integrated approach to multi-hazard modelling, with reference to hurricanes. The approach has three components: data integration, hazard assessment, and score calculation to estimate elements at risk such as the affected area and affected population. First, spatial data integration issues within a GIS environment, such as geographical scales and data models, are addressed. In particular, the integration of physical parameters and population data is achieved by linking remotely sensed data with a high-resolution population distribution in GIS. In order to assess the number of affected people from heterogeneous data sources, the selection of spatial analysis units is fundamental. Second, specific multi-hazard tasks, such as hazard behaviour simulation and elements-at-risk assessment, are combined in order to understand complex hazards and provide support for decision making. Finally, the paper concludes that the integrated approach presented here can be used to assist emergency management of hurricane consequences, in theory and in practice.
Use of agent-based modelling in emergency management under a range of flood hazards
Directory of Open Access Journals (Sweden)
Tagg Andrew
2016-01-01
The Life Safety Model (LSM) was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.
DEFF Research Database (Denmark)
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight…
Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.
Archer, B R; Almond, P R; Wagner, L K
1985-01-01
A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.
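The underlying idea is that a measured transmission (attenuation) curve is the Laplace transform of the spectrum written as a distribution over attenuation coefficients, T(x) = ∫ ψ(μ) e^(−μx) dμ, so inverting the transform recovers the spectrum. A toy forward model with an invented three-line spectrum (illustrative values only, not the paper's data):

```python
import math

# Hypothetical discrete spectrum: fluence weights over linear attenuation
# coefficients mu (cm^-1); the weights sum to 1.
spectrum = [(0.04, 0.2), (0.05, 0.5), (0.07, 0.3)]  # (mu, weight)

def transmission(x_cm):
    """Forward model: the attenuation curve is the Laplace transform of
    the spectrum expressed as a distribution over mu."""
    return sum(w * math.exp(-mu * x_cm) for mu, w in spectrum)

for x in (0, 10, 20):
    print(f"x = {x:2d} cm: T = {transmission(x):.4f}")
```

Evaluating the inverse transform numerically, as the paper does for 2-, 6-, and 25-MV attenuation curves, is the hard (and error-sensitive) direction; the forward model above only illustrates the transform-pair relationship.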
Earthquake Hazard and Risk in New Zealand
Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.
2014-12-01
To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), updates several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updated subduction zone geometry and recurrence rates, and new background rates together with a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in the northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed, as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where we do not have much information about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. Thus small changes to the background rates…
Hydrolysis and biotic transformation in water in the pesticide model
Horst, ter M.M.S.; Beltman, W.H.J.; Adriaanse, P.I.; Mulder, H.M.
2017-01-01
The TOXSWA model has been extended with the functionality to simulate hydrolysis and biotic transformation in water. TOXSWA simulates the fate of pesticides in water bodies to provide exposure estimates for aquatic organisms or sediment-dwelling organisms as part of the aquatic risk assessment.
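As a first-order sketch of how two such loss processes combine: parallel first-order hydrolysis and biotic transformation simply add their rate constants. The rate constants below are invented placeholders, not TOXSWA parameters:

```python
import math

k_hydrolysis = 0.01  # 1/d, hypothetical hydrolysis rate constant
k_biotic = 0.04      # 1/d, hypothetical biotic transformation rate constant

def concentration(c0, t_days):
    """Pesticide concentration in water under parallel first-order decline."""
    return c0 * math.exp(-(k_hydrolysis + k_biotic) * t_days)

# Combined half-life (DT50) of the two parallel processes
half_life = math.log(2) / (k_hydrolysis + k_biotic)
print(f"combined DT50 = {half_life:.1f} d")
```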
Lu, Peng; Lin, Wenpeng; Niu, Zheng; Su, Yirong; Wu, Jinshui
2006-10-01
Nitrogen (N) is one of the main factors affecting environmental pollution. In recent years, non-point source pollution and water body eutrophication have become increasing concerns for both scientists and policy-makers. In order to assess the environmental hazard of soil total N pollution, a typical ecological unit was selected as the experimental site. This paper showed that the Box-Cox transformation achieved normality in the data set and dampened the effect of outliers. The best theoretical model of soil total N was a Gaussian model. Spatial variability of soil total N in the NE60° and NE150° directions showed that it had a strip anisotropic structure. The ordinary kriging estimate of soil total N concentration was mapped. The spatial distribution pattern of soil total N in the direction of NE150° displayed a strip-shaped structure. Kriging standard deviations (KSD) provided valuable information that will increase the accuracy of total N mapping. The probability kriging method is useful to assess the hazard of N pollution by providing the conditional probability of the N concentration exceeding the threshold value, where we found soil total N > 2.0 g/kg. The probability distribution of soil total N will be helpful to conduct hazard assessment, optimize fertilization, and develop management practices to control the non-point sources of N pollution.
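The Box-Cox step mentioned above is a one-parameter power transform; λ = 0 reduces to the log transform often applied to right-skewed concentration data. A minimal version (the soil total-N values are invented for illustration, not the study's measurements):

```python
import math

def box_cox(values, lam):
    """Box-Cox power transform of positive data; lam = 0 is the log transform."""
    if lam == 0:
        return [math.log(v) for v in values]
    return [(v ** lam - 1.0) / lam for v in values]

# A right-skewed, hypothetical set of soil total-N concentrations (g/kg)
raw = [0.5, 0.8, 1.0, 1.2, 1.5, 2.2, 4.0]
print([round(t, 3) for t in box_cox(raw, 0.0)])
```

In practice λ is chosen by maximizing the profile log-likelihood of the transformed data; routines such as `scipy.stats.boxcox` do this automatically.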
Hazards to nuclear plants from surface traffic accidents
International Nuclear Information System (INIS)
Hornyik, K.
1975-01-01
Analytic models have been developed for evaluating hazards to nuclear plants from hazardous-materials accidents in the vicinity of the plant. In particular, these models permit the evaluation of hazards from such accidents occurring on surface traffic routes near the plant. The analysis uses statistical information on accident rates, traffic frequency, and cargo-size distribution along with parameters describing properties of the hazardous cargo, plant design, and atmospheric conditions, to arrive at a conservative estimate of the annual probability of a catastrophic event. Two of the major effects associated with hazardous-materials accidents, explosion and release of toxic vapors, are treated by a common formalism which can be readily applied to any given case by means of a graphic procedure. As an example, for a typical case it is found that railroad shipments of chlorine in 55-ton tank cars constitute a greater hazard to a nearby nuclear plant than equally frequent rail shipments of explosives in amounts of 10 tons. 11 references. (U.S.)
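The structure of such a conservative estimate can be sketched as a product of rates and conditional probabilities. Every number below is a hypothetical placeholder for illustration, not a value from the paper:

```python
# Back-of-envelope combination of the factors named in the abstract:
# accident statistics, traffic frequency, cargo properties, and
# atmospheric transport toward the plant. All values are hypothetical.
accidents_per_car_mile = 1.0e-6   # rail accident rate per tank-car mile
exposed_track_miles = 2.0         # track length near the plant
shipments_per_year = 300          # hazardous tank cars passing per year
p_release_given_accident = 0.05   # conditional release probability
p_impact_given_release = 0.01     # wind toward plant, dose above threshold

annual_p = (accidents_per_car_mile * exposed_track_miles
            * shipments_per_year * p_release_given_accident
            * p_impact_given_release)
print(f"annual probability of a catastrophic event ~ {annual_p:.1e}")
```

The paper replaces such point values with statistical distributions of cargo size and atmospheric conditions, and provides a graphic procedure for applying the formalism to a given case.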
Proportional hazards model with varying coefficients for length-biased data.
Zhang, Feipeng; Chen, Xuerong; Zhou, Yong
2014-01-01
Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effect of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data is analyzed to demonstrate the applications of the proposed method.
DEFF Research Database (Denmark)
Enzenhoefer, R.; Binning, Philip John; Nowak, W.
2015-01-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any… […] The approach builds on the source-pathway-receptor concept and mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired…
Johnson, Branden B; Hallman, William K; Cuite, Cara L
2015-03-01
Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.
Bibliography - Existing Guidance for External Hazard Modelling
International Nuclear Information System (INIS)
Decker, Kurt
2015-01-01
The bibliography of deliverable D21.1 includes existing international and national guidance documents and standards on external hazard assessment together with a selection of recent scientific papers, which are regarded to provide useful information on the state of the art of external event modelling. The literature database is subdivided into International Standards, National Standards, and Science Papers. The deliverable is treated as a 'living document' which is regularly updated as necessary during the lifetime of ASAMPSA-E. The current content of the database is about 140 papers. Most of the articles are available as full-text versions in PDF format. The deliverable is available as an EndNote X4 database and as text files. The database includes the following information: Reference, Key words, Abstract (if available), PDF file of the original paper (if available), Notes (comments by the ASAMPSA-E consortium if available) The database is stored at the ASAMPSA-E FTP server hosted by IRSN. PDF files of original papers are accessible through the EndNote software
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
Playing against nature: improving earthquake hazard mitigation
Stein, S. A.; Stein, J.
2012-12-01
The great 2011 Tohoku earthquake dramatically demonstrated the need to improve earthquake and tsunami hazard assessment and mitigation policies. The earthquake was much larger than predicted by hazard models, and the resulting tsunami overtopped coastal defenses, causing more than 15,000 deaths and $210 billion in damage. Hence whether and how such defenses should be rebuilt is a challenging question, because the defenses fared poorly and building ones to withstand tsunamis as large as March's is too expensive. A similar issue arises along the Nankai Trough to the south, where new estimates warning of tsunamis 2-5 times higher than in previous models raise the question of what to do, given that the timescale on which such events may occur is unknown. Thus, in the words of economist H. Hori: "What should we do in face of uncertainty? Some say we should spend our resources on present problems instead of wasting them on things whose results are uncertain. Others say we should prepare for future unknown disasters precisely because they are uncertain." Thus society needs strategies to mitigate earthquake and tsunami hazards that make economic and societal sense, given that our ability to assess these hazards is poor, as illustrated by highly destructive earthquakes that often occur in areas predicted by hazard maps to be relatively safe. Conceptually, we are playing a game against nature "of which we still don't know all the rules" (Lomnitz, 1989). Nature chooses tsunami heights or ground shaking, and society selects the strategy to minimize the total costs of damage plus mitigation. As in any game of chance, we maximize our expectation value by selecting the best strategy, given our limited ability to estimate the occurrence and effects of future events. We thus outline a framework to find the optimal level of mitigation by balancing its cost against the expected damages, recognizing the uncertainties in the hazard estimates. This framework illustrates the role of the…
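The cost-balancing framework sketched in the last sentences can be made concrete with toy cost functions: choose the mitigation level n (say, seawall height in metres) that minimizes total cost C(n) = mitigation cost(n) + expected damage(n). Both functions below are invented for illustration only:

```python
def mitigation_cost(n):
    """Construction cost grows with seawall height (hypothetical, linear)."""
    return 100.0 * n

def expected_damage(n):
    """Expected loss falls with height (hypothetical: each metre halves it)."""
    return 10000.0 * 0.5 ** n

def total_cost(n):
    return mitigation_cost(n) + expected_damage(n)

# Exhaustive search over candidate heights for the minimum total cost
best = min(range(0, 15), key=total_cost)
print(best, round(total_cost(best), 2))  # -> 6 756.25
```

With these toy numbers, building higher than 6 m costs more than the damage it averts, the qualitative trade-off the framework formalizes under hazard uncertainty.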
Generating WS-SecurityPolicy documents via security model transformation
DEFF Research Database (Denmark)
Jensen, Meiko
2009-01-01
When SOA-based business processes are to be enhanced with security properties, the model-driven business process development approach enables an easier and more reliable security definition than manually crafting the security realizations afterwards. In this paper, we outline an appropriate security model definition and transformation approach, targeting the WS-SecurityPolicy and WS-BPEL specifications, in order to enable Web-Service-based secure business process development.
Directory of Open Access Journals (Sweden)
Yanlei Li
2015-01-01
Full Text Available This paper proposes a new method for predicting spindle deformation from temperature data. The method introduces the adaptive neurofuzzy inference system (ANFIS), a neurofuzzy modeling approach, combined with kernel and geometrical transformations of the data. By utilizing these data transformations, the number of ANFIS rules can be effectively reduced and the structure of the predictive model simplified. To build the predictive model, we first map the original temperature data to a feature space with Gaussian kernels. We then process the mapped data with a geometrical transformation so that the data gather in a square region. Finally, the transformed data are used as input to train the ANFIS. A verification experiment is conducted to evaluate the performance of the proposed method. Six Pt100 thermal resistances are used to monitor the spindle temperature, and a laser displacement sensor is used to detect the spindle deformation. Experimental results show that the proposed method can precisely predict the spindle deformation and greatly improve the thermal performance of the spindle. Compared with back propagation (BP) networks, the proposed method is more suitable for complex working conditions in practical applications.
Directory of Open Access Journals (Sweden)
Abdul Salam Soomro
2012-10-01
Full Text Available The devastating tsunami of 26 December 2004 struck Krabi, one of the most fascinating ecotourism provinces of southern Thailand, along with neighbouring provinces such as Phang Nga and Phuket, destroying human lives, coastal communications, and economic activities. This study aimed to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream, and road, were used according to the land use zoning criteria. The criteria were weighted using the Saaty scale of importance, one of the mathematical weighting techniques. The model was classified according to land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference, and data conversion, were used. The resulting model comprises five categories (high, moderate, low, very low, and not suitable regions), each with an appropriate definition to help decision makers redevelop the region.
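The Saaty weighting step mentioned above is commonly implemented as the principal-eigenvector method of the Analytic Hierarchy Process. A minimal sketch follows; the pairwise-comparison values are hypothetical, not those of the study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix (Saaty 1-9 scale) for three of the
# criteria named in the study: elevation, proximity to shoreline, population density.
A = np.array([
    [1.0, 3.0, 5.0],   # elevation judged more important than the others
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # criterion weights, normalised to sum to 1

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1); CI << 0.1 is acceptable
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
```

The resulting weights would then drive the weighted overlay of the hazard indicator layers in the GIS.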
Tsunami hazard preventing based land use planning model using GIS technique in Muang Krabi, Thailand
International Nuclear Information System (INIS)
Soormo, A.S.
2012-01-01
The devastating tsunami of 26 December 2004 struck Krabi, one of the most fascinating ecotourism provinces of southern Thailand, along with neighbouring provinces such as Phang Nga and Phuket, destroying human lives, coastal communications, and economic activities. This study aimed to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream, and road, were used according to the land use zoning criteria. The criteria were weighted using the Saaty scale of importance, one of the mathematical weighting techniques. The model was classified according to land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference, and data conversion, were used. The resulting model comprises five categories (high, moderate, low, very low, and not suitable regions), each with an appropriate definition to help decision makers redevelop the region. (author)
International Nuclear Information System (INIS)
Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh; Rusaw, Richard; Bickford, Randall
2015-01-01
Research and development efforts are required to address the aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond their license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprising the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric for assessing the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermally induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot-spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor uses the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer from actual oil testing and operational data.
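The thermally induced degradation model described above is typically built on an Arrhenius aging-acceleration factor of the kind standardized in IEEE C57.91 for thermally upgraded paper. The sketch below is not the FW-PHM implementation, and the daily load cycle is hypothetical.

```python
import math

def aging_acceleration_factor(hot_spot_c: float) -> float:
    """Arrhenius aging-acceleration factor for thermally upgraded paper,
    in the style of IEEE C57.91 (reference hot-spot temperature 110 degC)."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

# One day of hourly hot-spot temperatures (hypothetical load cycle):
# 16 h of light load, 8 h of overload.
hourly_temps = [95.0] * 16 + [120.0] * 8

# Equivalent aging in reference hours; > 24 means the day consumed
# more than one nominal day of insulation life.
equivalent_hours = sum(aging_acceleration_factor(t) for t in hourly_temps)
```

Feeding such equivalent-aging totals into a cumulative life budget is one common way to turn loading history into a remaining-useful-life estimate.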
Stochastic modeling of a hazard detection and avoidance maneuver—The planetary landing case
International Nuclear Information System (INIS)
Witte, Lars
2013-01-01
Hazard Detection and Avoidance (HDA) functionality, i.e. the ability to recognize and avoid potentially hazardous terrain features, is regarded as an enabling technology for upcoming robotic planetary landing missions. In the run-up to any landing mission, the landing site safety assessment is an important task in the systems and mission engineering process. To contribute to this task, this paper presents a mathematical framework for considering the HDA strategy and system constraints in this mission engineering aspect. The HDA maneuver is modeled as a stochastic decision process based on Markov chains, mapping an initial dispersion at an arrival gate to a new dispersion pattern shaped by the divert decision-making and system constraints. The implications for an efficient numerical implementation are addressed. An example case study demonstrates the implementation and use of the proposed scheme.
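The core operation, mapping an arrival dispersion to a new dispersion through divert decisions, reduces to multiplying a probability vector by a Markov transition matrix. A minimal sketch with hypothetical states and transition probabilities (not the paper's model):

```python
import numpy as np

# Discretise the landing ellipse into 3 cells plus an absorbing "safe site" state.
# Row i of P holds the probabilities that a lander arriving over cell i ends up
# in each state after one divert decision epoch (illustrative numbers).
P = np.array([
    [0.2, 0.5, 0.0, 0.3],   # cell 0: hazardous, mostly diverts to cell 1
    [0.0, 0.6, 0.2, 0.2],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.0, 1.0],   # safe site is absorbing
])

p0 = np.array([0.5, 0.3, 0.2, 0.0])   # initial dispersion at the arrival gate

# Propagate the dispersion through two divert decision epochs
p1 = p0 @ P
p2 = p1 @ P
```

The probability mass accumulating in the absorbing state is the probability of reaching a safe site under the modeled divert constraints.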
Transforming PLC Programs into Formal Models for Verification Purposes
Darvas, D; Blanco, E
2013-01-01
Most of CERN's industrial installations rely on PLC-based (Programmable Logic Controller) control systems developed using the UNICOS framework. This framework contains common, reusable program modules, and their correctness is a high priority. Testing is already applied to find errors, but this method has limitations. In this work an approach is proposed to automatically transform PLC programs into formal models, with the goal of applying formal verification to ensure their correctness. We target model checking, a precise, mathematically based method for automatically checking formalized requirements against the system.
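In miniature, the approach amounts to translating a PLC program into a transition system and exhaustively checking a requirement over all reachable states. The toy interlock below is illustrative, not UNICOS code, and the "model checker" is a plain reachability search.

```python
from itertools import product

# Toy "PLC scan cycle" translated to a transition function over the program's
# boolean variables: a hypothetical start/stop motor interlock.
def scan(state, inputs):
    start, stop = inputs
    motor = (state["motor"] or start) and not stop   # stop overrides start
    return {"motor": motor}

# Exhaustive state-space exploration (model checking in miniature):
# from every reachable state, try every input combination.
reachable, frontier = set(), [(False,)]
while frontier:
    (motor,) = frontier.pop()
    if (motor,) in reachable:
        continue
    reachable.add((motor,))
    for start, stop in product([False, True], repeat=2):
        nxt = scan({"motor": motor}, (start, stop))
        frontier.append((nxt["motor"],))

# Safety requirement: whenever stop is asserted, the next state has motor off.
violations = [
    (motor, start)
    for (motor,) in reachable
    for start in (False, True)
    if scan({"motor": motor}, (start, True))["motor"]
]
```

Real model checkers (e.g. over temporal-logic properties) do essentially this over vastly larger, symbolically encoded state spaces.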
Dynamic modeling and simulation of power transformer maintenance costs
Directory of Open Access Journals (Sweden)
Ristić Olga
2016-01-01
Full Text Available The paper presents a dynamic model of the maintenance costs of power transformer functional components. Reliability is modeled by combining the exponential and Weibull distributions. The simulation was performed with the aim of corrective maintenance and installation of a continuous monitoring system for the most critical components. The Simulation Dynamic System (SDS) method and VENSIM PLE software were used to simulate the costs. In this way, significant savings in maintenance costs can be achieved with a small initial investment. [Project of the Ministry of Science of the Republic of Serbia, no. III 41025 and no. OI 171007]
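A reliability model combining exponential (random-failure) and Weibull (wear-out) terms, as described above, can be sketched as a series combination; all rates, shape parameters, and costs below are illustrative, not the paper's values.

```python
import numpy as np

def reliability(t, lam=0.02, eta=30.0, beta=2.5):
    """Series combination of two failure mechanisms: random failures
    (exponential, rate lam) and wear-out (Weibull, scale eta, shape beta).
    Illustrative parameters, time in years."""
    return np.exp(-lam * t) * np.exp(-((t / eta) ** beta))

t = np.linspace(0.0, 40.0, 401)
R = reliability(t)

# Expected corrective-maintenance cost up to time t, crudely approximated
# (non-repairable view) by cost per failure times the failure probability 1 - R(t).
cost_per_failure = 10_000.0
expected_cost = cost_per_failure * (1.0 - R)
```

Comparing such cost curves with and without condition monitoring (which effectively stretches the Weibull scale) is the kind of trade-off the simulation explores.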
Directory of Open Access Journals (Sweden)
P. Horton
2013-04-01
Full Text Available The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, which is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both source area delineation and propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time
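The Holmgren-style multiple-flow-direction weighting that Flow-R builds upon can be sketched as follows. This is the classic Holmgren (1994) form, not Flow-R's modified version, and the DEM values and exponent are illustrative.

```python
import numpy as np

def holmgren_weights(center_z, neighbour_z, distances, x=4.0):
    """Spread proportions to downslope neighbours following Holmgren (1994):
    p_i proportional to (tan beta_i)^x, where x controls flow divergence
    (x = 1 highly divergent, large x close to single flow direction)."""
    tan_beta = (center_z - neighbour_z) / distances
    tan_beta = np.clip(tan_beta, 0.0, None)      # only downslope cells receive flow
    w = tan_beta ** x
    total = w.sum()
    return w / total if total > 0 else w

# 8 neighbours of a cell on a 10 m DEM (cardinal distance 10 m, diagonal ~14.14 m)
neighbour_z = np.array([98.0, 97.5, 99.0, 100.5, 96.0, 99.5, 100.0, 98.5])
dists = np.array([10.0, 14.14, 10.0, 14.14, 10.0, 14.14, 10.0, 14.14])
w = holmgren_weights(100.0, neighbour_z, dists)
```

Upslope and flat neighbours receive no flow, and the steepest neighbour dominates more strongly as the exponent grows.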
Model-free approach to the estimation of radiation hazards. I. Theory
International Nuclear Information System (INIS)
Zaider, M.; Brenner, D.J.
1986-01-01
The experience of the Japanese atomic bomb survivors constitutes to date the major database for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses of this experience have been performed and published, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem: the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose a model-free approach to the estimation of radiation hazards
Energy Technology Data Exchange (ETDEWEB)
Dudek, K; Glowacki, M; Pietrzyk, M [Akademia Gorniczo-Hutnicza, Cracow (Poland)
1999-07-01
A numerical model describing the stresses arising during phase transformations in steel products is presented. The full model consists of three components. The first component uses a finite element solution of the Fourier equation to evaluate the temperature field inside the sample. The second component predicts the kinetics of the phase transformations occurring during cooling of steel products. Coupling these two components allows prediction of the structure and properties of final products at room temperature. The third component uses an elastic-plastic finite element model to predict the stresses caused by non-uniform temperatures and by volume changes during transformations. Typical results of simulations performed for the cooling of rails after hot rolling are presented. (author)
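The first component solves the Fourier (heat conduction) equation. A much-simplified sketch of that cooling step, using an explicit finite-difference scheme in 1-D rather than the paper's finite elements, with illustrative material values:

```python
import numpy as np

# Explicit finite-difference solution of the 1-D Fourier equation
# dT/dt = alpha * d2T/dx2 for a cooling steel section (illustrative values).
alpha = 1.2e-5          # thermal diffusivity, m^2/s
L, nx = 0.1, 51         # 10 cm section, 51 nodes
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha   # stability requires dt <= dx^2 / (2 * alpha)

T = np.full(nx, 800.0)  # initial temperature, degC
T_air = 20.0

for _ in range(2000):
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T_new[0] = T_new[-1] = T_air      # surfaces held at ambient (crude boundary)
    T = T_new
```

The resulting temperature field is what the transformation-kinetics and stress components would consume at each time step.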
DEFF Research Database (Denmark)
Ramin, Pedram; Valverde Pérez, Borja; Polesel, Fabio
2017-01-01
This study presents a novel statistical approach for identifying sequenced chemical transformation pathways in combination with reaction kinetics models. The proposed method relies on sound uncertainty propagation, using parameter ranges and associated probability distributions obtained at any given transformation pathway level as priors for parameter estimation at subsequent levels. The method was applied to calibrate a model predicting the transformation in untreated wastewater of six biomarkers excreted following human metabolism of heroin and codeine. Results obtained suggest that the method developed has the potential to outperform conventional approaches in terms of prediction accuracy, transformation pathway identification and parameter identifiability. This method can be used in conjunction with optimal experimental designs to effectively identify...
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Anh-Tuan; Kang, Jeong-Ki; Kim, Woo-Sik [Department of Chemical Engineering, Kyung Hee University, Seocheon-Dong, Giheung-Gu, 446-701 Yongin-Si (Korea, Republic of); Choi, Guang Jin [Department of Pharmaceutical Engineering, Inje University, 607 Uhbang-Dong, Gimhae, 621-746 Kyungnam (Korea, Republic of)
2011-01-15
The phase transformation of Guanosine 5′-Monophosphate (GMP) in drowning-out crystallization using a batch system was experimentally monitored and mathematically modeled. The solid (amorphous and crystalline GMP hydrate) and liquid phases of the GMP products were simultaneously monitored using a video microscope, FT-IR, and UV/Vis spectroscopy during the phase transformation. For the modeling, the phase transformation was assumed to occur via the simultaneous dissolution of amorphous GMP and growth of crystalline GMP hydrate in the solution. Based on a comparison of the experimental results and model predictions, both the dissolution and growth of the GMP solids were found to contribute competitively to the phase transformation. When varying the crystallization conditions, in this case the agitation speed and feed concentration, the phase transformation was significantly promoted by increasing the agitation speed, yet was independent of the feed concentration. The simple mathematical model used for the GMP phase transformation was quite successful in describing the experimental results. (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
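The assumed mechanism, simultaneous dissolution of the amorphous phase and growth of the crystalline hydrate, can be sketched as a pair of coupled first-order rate laws around the two solubilities. All rate constants, solubilities, and masses below are illustrative, not fitted GMP values.

```python
# Competing dissolution of amorphous solid and growth of crystalline hydrate,
# driven by under- and supersaturation respectively (all values illustrative).
dt, steps = 0.01, 5000
ka, kg = 0.8, 0.5               # dissolution / growth rate constants
c_sat_am, c_sat_cr = 2.0, 1.0   # amorphous phase is more soluble than the hydrate

M_am, M_cr, c = 10.0, 0.1, 1.2  # masses of the two solids; solute concentration
for _ in range(steps):
    dissolve = ka * M_am * max(c_sat_am - c, 0.0)   # driven by undersaturation
    grow = kg * M_cr * max(c - c_sat_cr, 0.0)       # driven by supersaturation
    M_am -= dissolve * dt
    M_cr += grow * dt
    c += (dissolve - grow) * dt
```

Because the amorphous solubility exceeds the crystalline one, the solute concentration stays supersaturated with respect to the hydrate until the amorphous phase is exhausted, after which the concentration relaxes to the hydrate solubility.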
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model along with GIS mapping techniques is used to assess desertification hazard in a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters were mostly developed for the Mediterranean region, the next step added further indicators such as groundwater and wind erosion. All layers, weighted according to the environmental conditions present in the area (following the same MEDALUS framework), were then combined to prepare a desertification map. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters allows a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land quality management, vegetation degradation and the salinization of soil and water resources.
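The MEDALUS combination step is conventionally a geometric mean of quality indices, each scaled from 1 (best) to 2 (worst), and it extends naturally when indicators such as groundwater and wind erosion are added. A sketch with hypothetical index values:

```python
import numpy as np

# MEDALUS-style environmentally sensitive area index: geometric mean of
# quality indices, each scaled 1 = best to 2 = worst. The original model uses
# climate, soil, vegetation and management quality; adding indicators such as
# groundwater and wind erosion simply extends the geometric mean.
def esa_index(*quality_indices):
    q = np.asarray(quality_indices, dtype=float)
    return float(np.prod(q) ** (1.0 / q.size))

original = esa_index(1.6, 1.4, 1.5, 1.7)             # CQI, SQI, VQI, MQI (hypothetical)
modified = esa_index(1.6, 1.4, 1.5, 1.7, 1.8, 1.6)   # + groundwater, wind erosion
```

Computed per pixel over the GIS layers, such index values are then binned into the severity classes shown on the desertification map.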
Study of Error Propagation in the Transformations of Dynamic Thermal Models of Buildings
Directory of Open Access Journals (Sweden)
Loïc Raillon
2017-01-01
Full Text Available Dynamic behaviour of a system may be described by models with different forms: thermal (RC) networks, state-space representations, transfer functions, and ARX models. These models, which describe the same process, are used in design, simulation, optimal predictive control, parameter identification, fault detection and diagnosis, and so on. Since several forms are available, it is interesting to know which one is the most suitable, by estimating the sensitivity of each model's transformation into a physical model, represented by a thermal network. A procedure for the study of error propagation by Monte Carlo simulation and of factor prioritization is exemplified on a simple, but representative, thermal model of a building. The analysis of the propagation of errors and of their influence on parameter estimation shows that the transformation from state-space representation to transfer function is more robust than the other way around. Therefore, if only one model is chosen, the state-space representation is preferable.
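The error-propagation experiment can be miniaturized with a first-order thermal network, whose transfer function from heat input to temperature is K / (tau*s + 1) with K = R and tau = R*C. The Monte Carlo sketch below uses illustrative parameter values and is far simpler than the paper's building model.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order thermal network: C * dT/dt = (T_out - T)/R + Q.
# Transfer function from Q to T: K / (tau*s + 1), with K = R and tau = R*C.
R_nom, C_nom = 0.05, 1.0e5          # K/W and J/K (illustrative building values)

# Perturb the physical parameters by 5% and propagate the error into the
# transfer-function time constant (the "forward" transformation).
R = R_nom * (1 + 0.05 * rng.standard_normal(10_000))
C = C_nom * (1 + 0.05 * rng.standard_normal(10_000))
tau = R * C

def cv(x):
    """Coefficient of variation: relative spread of a sampled quantity."""
    return np.std(x) / np.mean(x)

# For small independent errors, cv(tau) ~ sqrt(cv(R)^2 + cv(C)^2).
```

The reverse direction (recovering R and C from identified transfer-function coefficients) is where the paper finds the error amplification to be worse.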
Assessment of Meteorological Drought Hazard Area using GIS in ...
African Journals Online (AJOL)
Michael Horsfall
The purpose of this study was to make a model of the meteorological drought hazard area using GIS. ... overlaying different hazard indicator maps in the GIS, deploying the new model. The final ..... Northeast Thailand Project Bangkok. Min. of.
Socio-economic vulnerability to natural hazards - proposal for an indicator-based model
Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.
2012-04-01
Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, and indirect losses, i.e. long-term effects of the event. Direct impacts of a landslide typically include casualties and damage to buildings and infrastructure, while indirect losses may include, for example, business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economic resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually
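An indicator-based score of this kind is typically a weighted sum of indicators normalised to a common scale. The sketch below uses hypothetical indicators and weights, not the SafeLand model's:

```python
# Hypothetical indicator-based socio-economic vulnerability score: indicators
# are normalised to [0, 1] (1 = most vulnerable) and combined as a weighted sum.
indicators = {
    "population_density": 0.7,
    "economic_resources": 0.4,   # inverted scale: low resources -> high value
    "preparedness":       0.6,   # inverted scale: low preparedness -> high value
    "recovery_capacity":  0.5,
}
weights = {
    "population_density": 0.3,
    "economic_resources": 0.3,
    "preparedness":       0.2,
    "recovery_capacity":  0.2,
}

vulnerability = sum(weights[k] * indicators[k] for k in indicators)
```

Because the weights sum to 1, the composite score stays on the same [0, 1] scale as the individual indicators, which makes communities directly comparable.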
International Nuclear Information System (INIS)
Mendez, W.M. Jr.
1990-01-01
Remediation of hazardous and mixed waste sites is often driven by assessments of the human health risks posed by exposure to hazardous substances released from these sites. The methods used to assess potential health risk involve, either implicitly or explicitly, models for pollutant releases, transport, human exposure and intake, and for characterizing health effects. Because knowledge about pollutant fate and transport processes at most waste sites is quite limited, and data costs are quite high, most of the models currently used to assess risk, and endorsed by regulatory agencies, are quite simple. The models employ many simplifying assumptions about pollutant fate and distribution in the environment, about human pollutant intake, and about toxicologic responses to pollutant exposures. An important consequence of data scarcity and model simplification is that risk estimates are quite uncertain, and estimating the magnitude of the uncertainty associated with a risk assessment has been very difficult. A number of methods have been developed to address the issue of uncertainty in risk assessments in a manner that realistically reflects uncertainty in model specification and data limitations. These methods include the definition of multiple exposure scenarios, sensitivity analyses, and explicit probabilistic modeling of uncertainty. Recent developments in this area are discussed, along with their possible impacts on remediation programs and the remaining obstacles to their wider use and acceptance by the scientific and regulatory communities.
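Explicit probabilistic modeling of uncertainty usually means Monte Carlo propagation of parameter distributions through the exposure model. A minimal sketch with an illustrative ingestion-dose equation and hypothetical distributions (not regulatory defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simplified ingestion-exposure model: average daily dose
#   ADD = C * IR * EF / BW
# with parameter distributions standing in for scarce site data
# (all distributions illustrative).
C  = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=n)   # mg/L in groundwater
IR = rng.normal(2.0, 0.4, size=n).clip(min=0.5)           # water intake, L/day
EF = 350.0 / 365.0                                        # exposure frequency
BW = rng.normal(70.0, 12.0, size=n).clip(min=30.0)        # body weight, kg

add = C * IR * EF / BW     # mg/kg-day

# Report percentiles of the dose distribution instead of a single point estimate
p50, p95 = np.percentile(add, [50, 95])
```

Reporting the spread between the median and upper percentiles makes the uncertainty visible, which is precisely what deterministic point estimates hide.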
Directory of Open Access Journals (Sweden)
Islam Abou El-Magd
2010-06-01
Full Text Available In the mountainous area of the Red Sea region in southeastern Egypt, the development of new mining activities and/or domestic infrastructure requires reliable and accurate information about natural hazards, particularly flash floods. This paper presents an assessment of flash flood hazards in the Abu Dabbab drainage basin. Remotely sensed data were used to delineate the alluvial active channels, which were integrated with morphometric parameters extracted from digital elevation models (DEM) into geographical information systems (GIS) to construct a hydrological model that provides estimates of the amount of surface runoff as well as the magnitude of flash floods. The peak discharge varies between cross-sections along the main channel. Under a consistent 10 mm rainfall event, the selected cross-section in the middle of the main channel is prone to a maximum water depth of 80 cm, which decreases to nearly 30 cm at the outlet due to transmission loss. The estimated spatial variability of flow parameters within the catchment at the different confluences of the constituent sub-catchments can be used in planning engineering foundations and linear infrastructure with the least flash flood hazard. Such information would, indeed, help decision makers and planners minimize such hazards.
Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada
International Nuclear Information System (INIS)
Coppersmith, K.J.; Youngs, R.R.
1992-01-01
In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors develop a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high-level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, we allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.
Day-ahead electricity price forecasting using wavelet transform combined with ARIMA and GARCH models
International Nuclear Information System (INIS)
Tan, Zhongfu; Zhang, Jinliang; Xu, Jun; Wang, Jianhui
2010-01-01
This paper proposes a novel price forecasting method based on wavelet transform combined with ARIMA and GARCH models. By wavelet transform, the historical price series is decomposed and reconstructed into one approximation series and some detail series. Then each subseries can be separately predicted by a suitable time series model. The final forecast is obtained by composing the forecasted results of each subseries. This proposed method is examined on Spanish and PJM electricity markets and compared with some other forecasting methods. (author)
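The decompose-forecast-recompose scheme can be sketched with a one-level Haar transform and a simple AR(1) forecaster standing in for both the ARIMA/GARCH models and the paper's chosen wavelet; everything below is illustrative.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail subseries."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_reconstruct(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def ar1_forecast(s):
    """Least-squares AR(1) one-step forecast, standing in for the
    ARIMA/GARCH models fitted to each subseries in the paper."""
    phi = np.dot(s[:-1], s[1:]) / np.dot(s[:-1], s[:-1])
    return phi * s[-1]

prices = 40 + 5 * np.sin(np.arange(64) / 4.0)   # synthetic price series
a, d = haar_decompose(prices)

# Perfect-reconstruction check, then forecast each subseries separately
# and compose the subseries forecasts back into the price domain.
assert np.allclose(haar_reconstruct(a, d), prices)
next_a, next_d = ar1_forecast(a), ar1_forecast(d)
next_pair = haar_reconstruct(np.array([next_a]), np.array([next_d]))
```

The point of the decomposition is that the smooth approximation and the volatile detail series each get a model suited to their behaviour before the forecasts are recombined.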
Model Transformation for a System of Systems Dependability Safety Case
Murphy, Judy; Driskell, Steve
2011-01-01
The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.
A Concept Transformation Learning Model for Architectural Design Learning Process
Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming
2016-01-01
Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…
Modeling the bathtub shape hazard rate function in terms of reliability
International Nuclear Information System (INIS)
Wang, K.S.; Hsu, F.S.; Liu, P.P.
2002-01-01
In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to unpredictable failures modeled as a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; this term depends on the failure probability, 1-R, which expresses the memory characteristic of this failure cause. Man-machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate operating habits; it is suggested that this term is correlated with the reliability, R, as well as the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays together with a decreasing failure rate, expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is then found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used to validate the proposed model, with satisfactory results in the comparisons.
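An additive hazard of this kind can be written out and integrated to recover a bathtub-shaped hazard over time. The coefficients below are hypothetical, chosen only to exhibit the shape, and the exact functional form of each term is a guess consistent with the description above, not the paper's fitted model.

```python
import numpy as np

# Illustrative additive hazard in terms of reliability, one term per mechanism:
#   constant a        -> random failures (Poisson)
#   b * (1 - R)       -> cumulative damage (grows with failure probability)
#   c * R * (1 - R)   -> man-machine interference (depends on R and 1 - R)
#   d * R**k          -> adaptation / hidden defects (decays as R drops)
def h(R, a=0.02, b=0.5, c=0.1, d=1.0, k=12):
    return a + b * (1 - R) + c * R * (1 - R) + d * R**k

# Recover the hazard-versus-time curve by integrating dR/dt = -h(R) * R
dt, steps = 0.01, 3000
R, hazard = 1.0, []
for _ in range(steps):
    hazard.append(h(R))
    R += -h(R) * R * dt

hazard = np.array(hazard)
```

The trajectory is bathtub-shaped: the adaptation term dominates early (infant mortality), the constant term mid-life, and the cumulative-damage term late, so the hazard minimum lies strictly inside the time interval.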
idSpace D2.3 – Semantic meta-model integration and transformations v2
DEFF Research Database (Denmark)
Dolog, Peter; Grube, Pascal; Schmid, Klaus
2009-01-01
This deliverable discusses an extended set of requirements for transformations and the meta-model for creativity techniques. Based on the requirements, the deliverable provides a refined meta-model. The meta-model allows for more advanced transformation concepts besides the previously delivered graph transformation... oriented implementation with portlets and widgets in the Liferay portal.
International Nuclear Information System (INIS)
Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran
2017-01-01
In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the hysteresis model used. In this paper, a hysteresis model based on the first-order reversal curve has been used. - Highlights: • A lumped-element method for modelling transformers is demonstrated. • The method can include hysteresis and arbitrarily complex geometries. • Simulation results for one power transformer are compared to measurements. • An analytical curve-fitting expression for static hysteresis loops is shown.
On a model-based approach to radiation protection
International Nuclear Information System (INIS)
Waligorski, M.P.R.
2002-01-01
There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
Thiel, Rainer; Viceconti, Marco; Stroetmann, Karl
2011-01-01
Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely to revolutionise in the longer term the practice of medicine. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, a consequent benefit-cost analysis delivers promising results, both methodologically and substantially.
Linear models for assessing mechanisms of sperm competition: the trouble with transformations.
Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K
2003-01-01
Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
DEFF Research Database (Denmark)
Nielsen, Jan; Parner, Erik
2010-01-01
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...
Investigation of lithium thionyl chloride battery safety hazards
McDonald, R. C.; Dampier, F. W.; Wang, P.; Bennett, J. M.
1983-01-01
The chemistry of discharge and overdischarge in Li/SOCl2 cells has been examined with Raman emission, Fourier transform infrared, and electron spin resonance spectroscopies to determine if any hazardous reactions can occur. Under moderate discharge rates at room temperature, the electrolyte from discharged and cathode-limited overdischarged cells contains primarily LiAlCl4.3SO2, LiAlCl4.2SOCl2, and perhaps LiAlCl4.SOCl2.SO2; traces of SO3 are indicated. Three free radicals are present at low concentrations on discharge and cathode-limited overdischarge, with two additional radicals appearing on extended anode-limited overdischarge. At least one of these is cationic polymeric sulfur. Both FTIR and ESR suggest intermediates exist with lifetimes on the order of days from discharge and overcharge. No hazardous reactions were observed at any time. Pressure from SO2, a principal product of discharge, remains low due to the LiAlCl4.3SO2 complex in solution. Scanning electron and optical microscopy were used to investigate lithium dendrite structure. Individual dendrites do not grow any longer than about 50 microns or any thicker than about four microns in diameter before branching at random angles. The extent of dendritic growth and the fate of the dendrites depend on the discharge conditions. No overcharge hazards were encountered in this study, though several hazard scenarios suggested themselves.
International Nuclear Information System (INIS)
Berge-Thierry, C.
2007-05-01
The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for structural specificity (conventional structures or high-risk constructions), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion the structure has to resist (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly across the different tasks of seismic hazard assessment. Through expert assessment practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)
Transportation of Hazardous Materials Emergency Preparedness Hazards Assessment
Energy Technology Data Exchange (ETDEWEB)
Blanchard, A.
2000-02-28
This report documents the Emergency Preparedness Hazards Assessment (EPHA) for the Transportation of Hazardous Materials (THM) at the Department of Energy (DOE) Savannah River Site (SRS). This hazards assessment is intended to identify and analyze those transportation hazards significant enough to warrant consideration in the SRS Emergency Management Program.
Hermann, Frank; Ehrig, Hartmut; Orejas, Fernando; Ulrike, Golas
2010-01-01
Triple Graph Grammars (TGGs) are a well-established concept for the specification of model transformations. In previous work we have already formalized and analyzed crucial properties of model transformations such as termination, correctness and completeness, but functional behaviour - especially local confluence - has been missing up to now. In order to close this gap we generate forward translation rules, which extend standard forward rules by translation attributes keeping track of the elements whi...
Mullens, E.; Mcpherson, R. A.
2016-12-01
This work develops detailed trends in climate hazards affecting the Department of Transportation's Region 6, in the South Central U.S. First, a survey was developed to gather information regarding weather and climate hazards in the region from the transportation community, identifying key phenomena and thresholds to evaluate. Statistically downscaled datasets were obtained from the Multivariate Adaptive Constructed Analogues (MACA) project and the Asynchronous Regional Regression Model (ARRM), for a total of 21 model projections, two coupled model intercomparisons (CMIP3 and CMIP5), and four emissions pathways (A1Fi, B1, RCP8.5, RCP4.5). Specific hazards investigated include winter weather, freeze-thaw cycles, hot and cold extremes, and heavy precipitation. Projections for each of these variables were calculated for the region, utilizing spatial mapping and time series analysis at the climate division level. The results indicate that cold-season phenomena such as winter weather, freeze-thaw, and cold extremes decrease in intensity and frequency, particularly under the higher emissions pathways. Nonetheless, the specific model and downscaling method yield variability in magnitudes, with the most notable decreasing trends late in the 21st century. Hot days show a pronounced increase, particularly with greater emissions, producing annual mean 100 °F day frequencies by the late 21st century analogous to the 2011 heatwave over the central Southern Plains. Heavy precipitation, evidenced by return period estimates and counts-over-thresholds, also shows notable increasing trends, particularly from the recent past through the mid-21st century. Conversely, mean precipitation does not show significant trends and is regionally variable. Precipitation hazards (e.g., winter weather, extremes) diverge between downscaling methods and their associated model samples much more substantially than temperature, suggesting that the choice of global model and downscaled data is particularly
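The counts-over-thresholds metric used for the heat hazard above amounts to counting exceedance days per year; a minimal sketch (the temperature record is invented for illustration, not MACA or ARRM output):

```python
def days_over_threshold(daily_max_f, threshold_f=100.0):
    """Count days whose maximum temperature meets or exceeds a threshold (deg F)."""
    return sum(1 for t in daily_max_f if t >= threshold_f)

def annual_frequencies(series_by_year, threshold_f=100.0):
    """Annual counts-over-threshold, one value per year."""
    return {year: days_over_threshold(temps, threshold_f)
            for year, temps in series_by_year.items()}

# Hypothetical two-year record (values are purely illustrative).
record = {2011: [101.0, 99.5, 104.2, 100.0], 2012: [95.0, 98.0]}
counts = annual_frequencies(record)
```

A trend in the resulting annual counts, rather than in the raw temperatures, is what the projections above compare across emissions pathways.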
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
Directory of Open Access Journals (Sweden)
Solomencevs Artūrs
2016-05-01
The approach called “Topological Functioning Model for Software Engineering” (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. The TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on the transformation to a UML Sequence diagram at the platform independent (PIM) level. The results show the advantages and drawbacks that the formalism of the TFM brings into the development.
Czech Academy of Sciences Publication Activity Database
Svoboda, Jiří; Gamsjäger, E.
2011-01-01
Roč. 102, č. 6 (2011), s. 666-673 ISSN 1862-5282 R&D Projects: GA MŠk(CZ) OC10029 Institutional research plan: CEZ:AV0Z20410507 Keywords: modelling * phase transformation * diffusion Subject RIV: BJ - Thermodynamics Impact factor: 0.830, year: 2011
Sadegh, M.; Vrugt, J. A.
2011-12-01
In the past few years, several contributions have appeared in the hydrologic literature that introduce and analyze the benefits of using a signature-based approach to watershed analysis. This signature-based approach abandons the standard single-criterion model-data fitting paradigm in favor of a diagnostic approach that better extracts the information contained in the available data. Despite the prospects of this new viewpoint, rather ad-hoc criteria have hitherto been proposed to improve watershed modeling. Here, we aim to provide a proper mathematical foundation for signature-based analysis. We analyze the information content of different data transformations by analyzing their convergence speed in Markov Chain Monte Carlo (MCMC) simulation using the generalized likelihood function of Schoups and Vrugt (2010). We compare the information content of the original discharge data against a simple square root and Box-Cox transformation of the streamflow data. We benchmark these results against wavelet and flow duration curve transformations that temporally disaggregate the discharge data. Our results conclusively demonstrate that wavelet transformations and flow duration curves significantly reduce the information content of the streamflow data and consequently unnecessarily increase the uncertainty of the HYMOD model parameters. Hydrologic signatures thus need to be found in the original data, without temporal disaggregation.
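The square-root and Box-Cox transformations compared above are simple one-parameter maps of the discharge data; a minimal sketch (not the authors' code, and the flow values are invented):

```python
import math

def box_cox(series, lam):
    """Box-Cox transform of a positive-valued series; lam = 0 is the natural log."""
    if lam == 0:
        return [math.log(q) for q in series]
    return [(q ** lam - 1.0) / lam for q in series]

def sqrt_transform(series):
    """Square-root transform, a fixed special case often tried first."""
    return [math.sqrt(q) for q in series]

flows = [0.5, 1.0, 2.0, 8.0]   # invented discharge values
z = box_cox(flows, 0.3)        # compresses high flows less strongly than the log
```

Unlike the wavelet and flow-duration-curve transformations, both maps are applied pointwise in time, which is why they preserve the temporal information the paper finds essential.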
Multi scenario seismic hazard assessment for Egypt
Mostafa, Shaimaa Ismail; Abd el-aal, Abd el-aziz Khairy; El-Eraki, Mohamed Ahmed
2018-05-01
Egypt is located in the northeastern corner of Africa within a sensitive seismotectonic location. Earthquakes are concentrated along the active tectonic boundaries of the African, Eurasian, and Arabian plates. The study area is characterized by northward-increasing sediment thickness, leading to more damage to structures in the north due to multiple reflections of seismic waves. Unfortunately, man-made constructions in Egypt were not designed to resist earthquake ground motions. So, it is important to evaluate the seismic hazard to reduce social and economic losses and preserve lives. Probabilistic seismic hazard assessment is used to evaluate the hazard using alternative seismotectonic models within a logic tree framework. Alternative seismotectonic models, magnitude-frequency relations, and various indigenous attenuation relationships were combined within a logic tree formulation to compute and develop the regional exposure on a set of hazard maps. Hazard contour maps are constructed for peak ground acceleration as well as 0.1-, 0.2-, 0.5-, 1-, and 2-s spectral periods for 100- and 475-year return periods for ground motion on rock. The results illustrate that Egypt is characterized by very low to high seismic activity, grading from the west to the eastern part of the country. The uniform hazard spectra are estimated at some important cities distributed all over Egypt. The deaggregation of seismic hazard is estimated at some cities to identify the scenario events that contribute to a selected seismic hazard level. The results of this study can be used for seismic microzonation, risk mitigation, and earthquake engineering purposes.
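A logic-tree hazard estimate is a weighted mean over branch results, and the 475-year return period quoted above corresponds to the conventional 10%-in-50-years exceedance level under a Poisson assumption; a sketch with illustrative numbers only:

```python
def weighted_exceedance(branch_probs, weights):
    """Weighted mean annual exceedance probability over logic-tree branches."""
    assert abs(sum(weights) - 1.0) < 1e-9, "branch weights must sum to 1"
    return sum(p * w for p, w in zip(branch_probs, weights))

def return_period(prob_in_t_years, t_years):
    """Return period implied by an exceedance probability over t years,
    assuming independent (Poissonian) annual occurrence."""
    p_annual = 1.0 - (1.0 - prob_in_t_years) ** (1.0 / t_years)
    return 1.0 / p_annual

# The conventional design level: 10% exceedance in 50 years is roughly 475 years.
rp_475 = return_period(0.10, 50.0)
```

The branch probabilities and weights would come from the alternative seismotectonic models and attenuation relationships named in the abstract; none are reproduced here.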
Issues in testing the new national seismic hazard model for Italy
Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.
2016-12-01
It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and we have no agreed way of assessing how well a map performed in the past - and thus whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismological, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus the strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps have developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences that show how the models have developed toward the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options, including deterministic scenario-based ones. Comparing what works
Directory of Open Access Journals (Sweden)
Kostas Alexandridis
2013-06-01
Assessing spatial model performance often presents challenges related to the choice and suitability of traditional statistical methods in capturing the true validity and dynamics of the predicted outcomes. The stochastic nature of many of our contemporary spatial models of land use change necessitates the testing and development of new and innovative methodologies in statistical spatial assessment. In many cases, spatial model performance depends critically on the spatially explicit prior distributions, characteristics, availability, and prevalence of the variables and factors under study. This study explores the statistical spatial characteristics of model assessment for modeling land use change dynamics in a seven-county study area in South-Eastern Wisconsin during the historical period 1963-1990. The artificial neural network-based Land Transformation Model (LTM) predictions are used to compare simulated with historical land use transformations in urban/suburban landscapes. We introduce a range of Bayesian information entropy statistical spatial metrics for assessing model performance across multiple simulation testing runs. Bayesian entropic estimates of model performance are compared against information-theoretic stochastic entropy estimates and theoretically derived accuracy assessments. We argue for the critical role of informational uncertainty across different scales of spatial resolution in informing spatial landscape model assessment. Our analysis reveals how incorporation of spatial and landscape information asymmetry estimates can improve our stochastic assessments of spatial model predictions. Finally, our study shows how spatially explicit entropic classification accuracy estimates can work closely with dynamic modeling methodologies in improving our scientific understanding of landscape change as a complex adaptive system and process.
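Entropy-based assessment metrics of the kind introduced above build on the Shannon entropy of a class distribution; a minimal sketch (not the authors' Bayesian metrics, just the underlying quantity):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete distribution, e.g. predicted
    class proportions over land-use categories."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

h_uniform = shannon_entropy([0.5, 0.5])  # maximal uncertainty: 1 bit
h_certain = shannon_entropy([1.0, 0.0])  # no uncertainty: 0 bits
```

Higher entropy in the predicted transformation map signals greater informational uncertainty, which is the quantity the study argues should inform spatial accuracy assessment.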
Directory of Open Access Journals (Sweden)
Tao YANG
2018-05-01
The Auto-Transformer Rectifier Unit (ATRU) is one preferred solution for high-power AC/DC power conversion in aircraft. This is mainly due to its simple structure, high reliability, and reduced kVA ratings. Indeed, the ATRU has become a preferred AC/DC solution to supply power to the electric environment control system on board future aircraft. In this paper, a general modelling method for ATRUs is introduced. The developed model is based on the fact that the DC voltage and current are strongly related to the voltage and current vectors at the AC terminals of the ATRU. In this paper, we build on our research in modelling symmetric 18-pulse ATRUs and develop a generic modelling technique. The developed generic model can handle not only symmetric but also asymmetric ATRUs. An 18-pulse asymmetric ATRU is used to demonstrate the accuracy and efficiency of the developed model by comparison with corresponding detailed switching SABER models provided by our industrial partner. The functional models also allow accelerated and accurate simulations, and thus enable whole-scale more-electric aircraft electrical power system studies in the future. Keywords: Asymmetric transformer, Functional modelling, More-Electric Aircraft, Multi-pulse rectifier, Transformer rectifier unit
Climate change induced transformations of agricultural systems: insights from a global model
Leclère, D.; Havlík, P.; Fuss, S.; Schmid, E.; Mosnier, A.; Walsh, B.; Valin, H.; Herrero, M.; Khabarov, N.; Obersteiner, M.
2014-12-01
Climate change might impact crop yields considerably, and anticipated transformations of agricultural systems are needed in the coming decades to sustain affordable food provision. However, decision-making on transformational shifts in agricultural systems is plagued by uncertainties concerning the nature and geography of climate change, its impacts, and adequate responses. Locking agricultural systems into inadequate transformations that are costly to adjust is a significant risk, and this acts as an incentive to delay action. It is crucial to gain insight into how much transformation is required of agricultural systems, how robust such strategies are, and how we can defuse the associated challenge for decision-making. Implementing a definition related to large changes in resource use into a global impact assessment modelling framework, we find transformational adaptations to be required of agricultural systems in most regions by the 2050s in order to cope with climate change. However, these transformations differ widely across climate change scenarios: uncertainties in large-scale development of irrigation span all continents from the 2030s on, and affect two-thirds of regions by the 2050s. Meanwhile, a significant but uncertain reduction of major agricultural areas affects the Northern Hemisphere's temperate latitudes, while increases in non-agricultural zones could be large but uncertain in one-third of regions. To help reduce the associated challenge for decision-making, we propose a methodology exploring which, when, where and why transformations could be required and uncertain, by means of scenario analysis.
Directory of Open Access Journals (Sweden)
Margaretha Ohyver
2016-12-01
Partial Least Squares (PLS) is a method developed in 1960 by Herman Wold. The method is particularly suited to constructing a regression model when the independent variables are many and highly collinear. PLS can be combined with other methods, one of which is the Continuous Wavelet Transformation (CWT). Since the presence of outliers can lead to a less reliable model, this kind of transformation may be applied at the pre-processing stage so that the data are free of noise and outliers. In a previous study, the Kendari hotel room occupancy rate was affected by outliers, and the model had a low value of R2. Therefore, this research aimed to obtain a good model by combining the PLS method and the CWT using the Mexican Hat mother wavelet. The research concludes that merging PLS and the Mexican Hat transformation results in a better model than the one combining PLS and the Haar wavelet transformation shown in the previous study. The research shows that by changing the mother wavelet, the value of R2 can be improved significantly. The result provides information on how to increase the value of R2. A further benefit is the information it gives hotel managements: attention to the age of the hotel, the maximum rates, the facilities, and the number of rooms can help increase the number of visitors.
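The Mexican Hat mother wavelet used here has a standard closed form (the normalised second derivative of a Gaussian); a sketch of the wavelet itself, independent of the PLS fitting step in the paper:

```python
import math

def mexican_hat(t, sigma=1.0):
    """Mexican hat (Ricker) mother wavelet: the normalised negative second
    derivative of a Gaussian with width sigma."""
    c = 2.0 / (math.sqrt(3.0 * sigma) * math.pi ** 0.25)
    x = t / sigma
    return c * (1.0 - x * x) * math.exp(-x * x / 2.0)

# The wavelet peaks at t = 0, crosses zero at t = +/- sigma, and is
# negative beyond, which is what smooths out isolated outliers.
```

In the CWT pre-processing step, the data are convolved with scaled and shifted copies of this function; the smooth, compactly concentrated shape is what distinguishes it from the discontinuous Haar wavelet of the earlier study.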
Dinitz, Laura B.
2008-01-01
With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. HAZUS
García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.
2007-10-01
A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and have also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site-dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
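Attenuation relationships, whose choice dominates the results above, commonly take the generic functional form ln PGA = a + b·M - c·ln R; the coefficients below are illustrative placeholders, not those of the European ground-motion models used in the paper:

```python
import math

def ln_pga(magnitude, distance_km, a=-3.5, b=0.9, c=1.2):
    """Generic attenuation (ground-motion) relation of the common form
    ln PGA = a + b*M - c*ln(R). All coefficients here are placeholders;
    real models add site, style-of-faulting and uncertainty terms."""
    return a + b * magnitude - c * math.log(distance_km)
```

Because predicted shaking grows with magnitude and decays with distance, swapping one fitted coefficient set for another shifts the whole hazard map, which is why the attenuation choice outweighs the source-zoning choice at these sites.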
Hazard avoidance via descent images for safe landing
Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen
2013-10-01
In planetary or lunar landing missions, hazard avoidance is critical for landing safety. Therefore, it is very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach via descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and the hazard distribution inside it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with the local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated descent downward-looking images rendered from lunar orbital satellite images.
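The proportion-of-hazards part of the safety coefficient can be sketched as a window scan over the binary hazard image; the toy map and the tie-breaking below are our own illustrative choices, not the authors' exact coefficient:

```python
def hazard_fraction(hazard_map, r0, c0, size):
    """Fraction of hazardous (value 1) pixels in a size x size window
    whose top-left corner is (r0, c0)."""
    cells = [hazard_map[r][c]
             for r in range(r0, r0 + size)
             for c in range(c0, c0 + size)]
    return sum(cells) / len(cells)

def safest_window(hazard_map, size):
    """Coarse search: top-left corner of the window with the lowest
    hazard fraction (first one found on ties)."""
    rows, cols = len(hazard_map), len(hazard_map[0])
    return min(((r, c)
                for r in range(rows - size + 1)
                for c in range(cols - size + 1)),
               key=lambda rc: hazard_fraction(hazard_map, rc[0], rc[1], size))

# Toy 4x4 binary hazard image (1 = hazard), purely illustrative.
toy_map = [[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
```

The paper's full coefficient also weights the spatial distribution of hazards inside the window and refines the coarse result before combining it with the ISD measure.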
Iverson, Richard M.; LeVeque, Randall J.
2009-01-01
A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.
Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation
Borga, M.; Creutin, J. D.
Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only one affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised, like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies, and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two
Detailed High Frequency Models of Various Winding Types in Power Transformers
DEFF Research Database (Denmark)
Pedersen, Kenneth; Lunow, Morten Erlandsson; Holbøll, Joachim
2005-01-01
Abstract--In this paper, techniques are described which demonstrate how a highly detailed internal transformer model can be obtained systematically with Matlab and how it can be prepared for subsequent transient analysis. The input of such a model will mainly be the description of the cross secti...... equivalent circuit. Finally a new circuit extraction technique is proposed for vector fitted impedance matrices for more efficient computation....
LAV@HAZARD: a web-GIS interface for volcanic hazard assessment
Directory of Open Access Journals (Sweden)
Giovanni Gallo
2011-12-01
Full Text Available Satellite data, radiative power of hot spots as measured with remote sensing, historical records, on-site geological surveys, digital elevation model data, and simulation results together provide a massive data source to investigate the behavior of active volcanoes like Mount Etna (Sicily, Italy) over recent times. The integration of these heterogeneous data into a coherent visualization framework is important for their practical exploitation. It is crucial to fill in the gap between experimental and numerical data, and the direct human perception of their meaning. Indeed, the people in charge of safety planning of an area need to be able to quickly assess hazards and other relevant issues even during critical situations. With this in mind, we developed LAV@HAZARD, a web-based geographic information system that provides an interface for the collection of all of the products coming from the LAVA project research activities. LAV@HAZARD is based on the Google Maps application programming interface, a choice motivated by its ease of use and the user-friendly interactive environment it provides. In particular, the web structure consists of four modules: satellite applications (time-space evolution of hot spots, radiant flux and effusion rate), hazard map visualization, a database of ca. 30,000 lava-flow simulations, and real-time scenario forecasting by MAGFLOW on Compute Unified Device Architecture.
Modular transformations of conformal blocks in WZW models on Riemann surfaces of higher genus
International Nuclear Information System (INIS)
Miao Li; Ming Yu.
1989-05-01
We derive the modular transformations for conformal blocks in Wess-Zumino-Witten models on Riemann surfaces of higher genus. The basic ingredient consists of using the Chern-Simons theory developed by Witten. We find that the modular transformations generated by Dehn twists are linear combinations of Wilson line operators, which can be expressed in terms of braiding matrices. It can also be shown that modular transformation matrices for g > 0 Riemann surfaces depend only on those for g ≤ 3. (author). 13 refs, 15 figs
Krieger, Nancy; Kaddour, Afamia; Koenen, Karestan; Kosheleva, Anna; Chen, Jarvis T; Waterman, Pamela D; Barbeau, Elizabeth M
2011-03-01
Few studies have simultaneously included exposure information on occupational hazards, relationship hazards (eg, intimate partner violence) and social hazards (eg, poverty and racial discrimination), especially among low-income multiracial/ethnic populations. A cross-sectional study (2003-2004) of 1202 workers employed at 14 worksites in the greater Boston area of Massachusetts investigated the independent and joint association of occupational, social and relationship hazards with psychological distress (K6 scale). Among this low-income cohort (45% were below the US poverty line), exposure to occupational, social and relationship hazards, per the 'inverse hazard law,' was high: 82% exposed to at least one occupational hazard, 79% to at least one social hazard, and 32% of men and 34% of women, respectively, stated they had been the perpetrator or target of intimate partner violence (IPV). Fully 15.4% had clinically significant psychological distress scores (K6 score ≥ 13). All three types of hazards, and also poverty, were independently associated with increased risk of psychological distress. In models including all three hazards, however, significant associations with psychological distress occurred among men and women for workplace abuse and high exposure to racial discrimination only; among men, for IPV; and among women, for high exposure to occupational hazards, poverty and smoking. Reckoning with the joint and embodied reality of diverse types of hazards involving how people live and work is necessary for understanding determinants of health status.
Development of evaluation method for software hazard identification techniques
International Nuclear Information System (INIS)
Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.
2006-01-01
This research evaluated the currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis; it then determined indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness, complexity, and implementation cost. This evaluation method can be a platform for reaching common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)
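The index-based evaluation amounts to a weighted score per technique combination. The index names below follow the abstract; the weights and the 1-5 scores are purely illustrative, not the study's values:

```python
def evaluate(scores, weights):
    """Weighted score for one hazard-identification combination.

    Keys are the seven indexes named in the abstract; scores use a 1-5
    scale where higher is better (complexity and implementation cost are
    pre-inverted, so 5 means low complexity/cost). Numbers are illustrative.
    """
    assert set(scores) == set(weights)
    return sum(scores[k] * weights[k] for k in scores)

weights = {"dynamic capability": 0.25, "completeness": 0.15,
           "achievability": 0.15, "detail": 0.15,
           "signal/noise ratio": 0.10, "complexity": 0.10,
           "implementation cost": 0.10}
dfm = {"dynamic capability": 5, "completeness": 3, "achievability": 4,
       "detail": 4, "signal/noise ratio": 4, "complexity": 2,
       "implementation cost": 2}
print(round(evaluate(dfm, weights), 2))            # → 3.7
```

Scoring each candidate combination this way gives the common comparison platform the abstract describes.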
KSC VAB Aeroacoustic Hazard Assessment
Oliveira, Justin M.; Yedo, Sabrina; Campbell, Michael D.; Atkinson, Joseph P.
2010-01-01
NASA Kennedy Space Center (KSC) carried out an analysis of the effects of aeroacoustics produced by stationary solid rocket motors in processing areas at KSC. In the current paper, attention is directed toward the acoustic effects of a motor burning within the Vehicle Assembly Building (VAB). The analysis was carried out with support from ASRC Aerospace who modeled transmission effects into surrounding facilities. Calculations were done using semi-analytical models for both aeroacoustics and transmission. From the results it was concluded that acoustic hazards in proximity to the source of ignition and plume can be severe; acoustic hazards in the far-field are significantly lower.
Hazards assessment for the Hazardous Waste Storage Facility
International Nuclear Information System (INIS)
Knudsen, J.K.; Calley, M.B.
1994-04-01
This report documents the hazards assessment for the Hazardous Waste Storage Facility (HWSF) located at the Idaho National Engineering Laboratory. The hazards assessment was performed to ensure that this facility complies with DOE and company requirements pertaining to emergency planning and preparedness for operational emergencies. The hazards assessment identifies and analyzes hazards that are significant enough to warrant consideration in a facility's operational emergency management program. The area surrounding HWSF, the buildings and structures at HWSF, and the processes used at HWSF are described in this report. All nonradiological hazardous materials at the HWSF were identified (radiological hazardous materials are not stored at HWSF) and screened against threshold quantities according to DOE Order 5500.3A guidance. Two of the identified hazardous materials exceeded their specified threshold quantity. This report discusses the potential release scenarios and consequences associated with an accidental release for each of the two identified hazardous materials, lead and mercury. Emergency considerations, such as emergency planning zones, emergency classes, protective actions, and emergency action levels, are also discussed based on the analysis of potential consequences. Evaluation of the potential consequences indicated that the highest emergency class for operational emergencies at the HWSF would be a Site Area Emergency
Transforming process models : executable rewrite rules versus a formalized Java program
Van Gorp, P.M.E.; Eshuis, H.; Petriu, D.C.; Rouquette, N.
2010-01-01
In the business process management community, transformations for process models are usually programmed using imperative languages (such as Java). The underlying mapping rules tend to be documented using informal visual rules whereas they tend to be formalized using mathematical set constructs. In
Transforming process models : executable rewrite rules versus a formalized Java program
Van Gorp, P.M.E.; Eshuis, H.
2010-01-01
In the business process management community, transformations for process models are usually programmed using imperative languages. The underlying mapping rules tend to be documented using informal visual rules whereas they tend to be formalized using mathematical set constructs. In the Graph and
International Nuclear Information System (INIS)
Duan Changkui; Gong Yungui; Dong Huining; Reid, Michael F.
2004-01-01
Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. These well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schroedinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other
Duan, Chang-Kui; Gong, Yungui; Dong, Hui-Ning; Reid, Michael F
2004-09-15
Effective interaction operators usually act on a restricted model space and give the same energies (for the Hamiltonian) and matrix elements (for transition operators, etc.) as those of the original operators between the corresponding true eigenstates. Various types of effective operators are possible. These well-defined effective operators have been shown to be related to each other by similarity transformations. Some of the effective operators have been shown to have connected-diagram expansions. It is shown in this paper that under a class of very general similarity transformations, the connectivity is conserved. The similarity transformation between Hermitian and non-Hermitian Rayleigh-Schrödinger perturbative effective operators is one such transformation, and hence the connectivity of one can be deduced from the other.
Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report
International Nuclear Information System (INIS)
Holcomb, C.; Louie, B.; Mullins, M.E.; Outcalt, S.L.; Rogers, T.N.; Watts, L.
1998-01-01
Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is waste water. In addition, the majority of the waste inventory at DOE sites previously used for nuclear weapons production is aqueous waste. Large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical property information is paramount. This knowledge will lead to huge savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are to: develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes; and accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure component, binary, ternary, and quaternary experimental data required for model development. As of May 1998, nine months into the first year of a three-year project, the authors have made significant progress in the database development, have begun testing the models, and have been performance testing the apparatus on the pure components.
SRS BEDROCK PROBABILISTIC SEISMIC HAZARD ANALYSIS (PSHA) DESIGN BASIS JUSTIFICATION (U)
Energy Technology Data Exchange (ETDEWEB)
(NOEMAIL), R
2005-12-14
This represents an assessment of the available Savannah River Site (SRS) hard-rock probabilistic seismic hazard assessments (PSHAs), including PSHAs recently completed, for incorporation in the SRS seismic hazard update. The prior assessment of the SRS seismic design basis (WSRC, 1997) incorporated the results from two PSHAs that were published in 1988 and 1993. Because of the vintage of these studies, an assessment is necessary to establish the value of these PSHAs considering more recently collected data affecting seismic hazards and the availability of more recent PSHAs. This task is consistent with the Department of Energy (DOE) order DOE O 420.1B and DOE guidance document DOE G 420.1-2. Following DOE guidance, the National Map hazard was reviewed and incorporated in this assessment. In addition to the National Map hazard, alternative ground motion attenuation models (GMAMs) are used with the National Map source model to produce alternate hazard assessments for the SRS. These hazard assessments are the basis for the updated hard-rock hazard recommendation made in this report. The development and comparison of hazard based on the National Map models and PSHAs completed using alternate GMAMs provides increased confidence in this hazard recommendation. The alternate GMAMs are EPRI (2004), USGS (2002) and a region-specific model (Silva et al., 2004). Weights of 0.6, 0.3 and 0.1 are recommended for EPRI (2004), USGS (2002) and Silva et al. (2004), respectively. This weighting gives cluster weights of 0.39, 0.29, 0.15, and 0.17 for the 1-corner, 2-corner, hybrid, and Green's-function models, respectively. This assessment is judged to be conservative as compared to WSRC (1997) and incorporates the range of prevailing expert opinion pertinent to the development of seismic hazard at the SRS. The corresponding SRS hard-rock uniform hazard spectra are greater than the design spectra developed in WSRC (1997) that were based on the LLNL (1993) and EPRI (1988) PSHAs. The
Krishna, Akhouri P.; Kumar, Santosh
2013-10-01
Landslide hazard assessments using computational models, such as artificial neural network (ANN) and frequency ratio (FR), were carried out covering one of the important mountain highways in the Central Himalaya of the Indian Himalayan Region (IHR). Landslide influencing factors were either calculated or extracted from spatial databases, including recent remote sensing data from LANDSAT TM, the CARTOSAT digital elevation model (DEM) and the Tropical Rainfall Measuring Mission (TRMM) satellite for rainfall data. ANN was implemented using the multi-layered feed-forward architecture with different input, output and hidden layers. This model, based on the back-propagation algorithm, derived weights for all possible parameters of landslides and the causative factors considered. The training sites for landslide-prone and non-prone areas were identified and verified through details gathered from remote sensing and other sources. Frequency ratio (FR) models are based on observed relationships between the distribution of landslides and each landslide-related factor. FR model implementation proved useful for assessing the spatial relationships between landslide locations and the factors contributing to their occurrence. The above computational models generated respective susceptibility maps of landslide hazard for the study area. This further allowed the simulation of landslide hazard maps on a medium scale using a GIS platform and remote sensing data. Upon validation and accuracy checks, it was observed that both models produced good results, with FR having some edge over ANN-based mapping. Such statistical and functional models led to a better understanding of the relationships between the landslides and preparatory factors as well as ensuring lower levels of subjectivity compared to qualitative approaches.
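The frequency ratio underlying the FR model has a simple closed form: the share of landslide occurrences falling in a factor class divided by that class's share of the study area. A minimal sketch (the numbers are illustrative, not from the study):

```python
def frequency_ratio(slides_in_class, total_slides, class_area, total_area):
    """FR = (share of landslides in a factor class) / (share of area in it).
    FR > 1 flags classes more landslide-prone than the regional average."""
    return (slides_in_class / total_slides) / (class_area / total_area)

# illustrative: a slope class covering 10% of the area holds 30% of landslides
print(round(frequency_ratio(30, 100, 10, 100), 2))   # → 3.0
```

Summing the FR values of all factor classes at each cell yields the susceptibility map.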
DEFF Research Database (Denmark)
Nordahl, H; Rod, NH; Frederiksen, BL
2013-01-01
seven Danish cohort studies were linked to registry data on education and incidence of CHD. Mediation by smoking, low physical activity, and body mass index (BMI) on the association between education and CHD were estimated by applying newly proposed methods for mediation based on the additive hazards...... % CI: 12, 22) for women and 37 (95 % CI: 28, 46) for men could be ascribed to the pathway through smoking. Further, 39 (95 % CI: 30, 49) cases for women and 94 (95 % CI: 79, 110) cases for men could be ascribed to the pathway through BMI. The effects of low physical activity were negligible. Using...... contemporary methods, the additive hazards model, for mediation we indicated the absolute numbers of CHD cases prevented when modifying smoking and BMI. This study confirms previous claims based on the Cox proportional hazards model that behavioral risk factors partially mediates the effect of education on CHD...
The effects of hazardous working conditions on burnout in Macau nurses
Sydney X. Hu; Andrew L. Luk; Graeme D. Smith
2015-01-01
Objective: To examine the effects of various hazardous factors in working environments on burnout in a cohort of clinical nurses in Macau. Methods: A cross-sectional survey was used to examine specific workplace hazards for burnout in qualified nurses (n = 424) in Macau. Structural equation modeling (SEM) was used to analyze relationships between specific hazards and manifestations of burnout. Results: In the final model, workplace hazards accounted for 73% of the variance of burnout wi...
Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G.
2009-04-01
Indonesia is located at one of the most active geological subduction zones in the world. Following the most recent seaquakes and their subsequent tsunamis in December 2004 and July 2006, it is expected that tsunamis are also likely to occur in the near future, due to increased tectonic tensions leading to abrupt vertical seafloor alterations after a century of relative tectonic silence. To face this devastating threat, tsunami hazard maps are very important as a basis for evacuation planning and mitigation strategies. For a tsunami impact, the hazard assessment is mostly covered by numerical modelling, because the model results normally offer the most precise database for a hazard analysis as they include spatially distributed data and their influence on the hydraulic dynamics. Generally, a model result gives a probability for the intensity distribution of a tsunami at the coast (or run-up) and the spatial distribution of the maximum inundation area depending on the location and magnitude of the tsunami source used. The boundary condition of the source used for the model is mostly chosen by a worst-case approach: the location and magnitude which are likely to occur and which are assumed to generate the worst impact are used to predict the impact at a specific area. But for a tsunami hazard assessment covering a large coastal area, as demanded in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable because many tsunami sources can cause an impact at the coast and must be considered. Thus a multi-scenario tsunami model approach is developed to provide a reliable hazard assessment covering large areas. For the Indonesian Early Warning System, many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) for different probable tsunami sources and with different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of
Development of seismic hazard analysis in Japan
International Nuclear Information System (INIS)
Itoh, T.; Ishii, K.; Ishikawa, Y.; Okumura, T.
1987-01-01
In recent years, seismic risk assessments of nuclear power plants have been conducted increasingly in various countries, particularly in the United States, to evaluate probabilistically the safety of existing plants under earthquake loading. The first step of a seismic risk assessment is the seismic hazard analysis, in which the relationship between the maximum earthquake ground motions at the plant site and their annual probability of exceedance, i.e. the seismic hazard curve, is estimated. In this paper, seismic hazard curves are evaluated and examined for several different sites in Japan based on a historical earthquake records model, in which seismic sources are modeled as area sources. A new evaluation method is also proposed to compute the response spectra of the earthquake ground motions in connection with estimating the probabilistic structural response. Finally, the numerical result of a probabilistic risk assessment for a base-isolated three-story RC structure, in which the frequency of seismically induced structural failure is evaluated in combination with the seismic hazard analysis, is described briefly
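A seismic hazard curve of the kind described can be sketched as the annual rate of exceedance summed over area sources, each with an occurrence rate and a lognormal ground-motion attenuation term. The source parameters below are illustrative, not from the paper:

```python
import math

def annual_exceedance(pga, sources):
    """Hazard curve lambda(a): annual rate of exceeding `pga`, summed over
    area sources. Each source is (annual event rate, median PGA in g,
    lognormal sigma); the exceedance term is the standard lognormal tail.
    All parameter values here are illustrative."""
    lam = 0.0
    for rate, median, sigma in sources:
        z = (math.log(pga) - math.log(median)) / sigma
        lam += rate * 0.5 * math.erfc(z / math.sqrt(2.0))
    return lam

sources = [(0.2, 0.05, 0.6),    # frequent, weak source
           (0.05, 0.15, 0.6)]   # rarer, stronger source
lam = annual_exceedance(0.1, sources)
print(round(1.0 / lam, 1))      # return period in years for 0.1 g
```

Evaluating the rate over a grid of ground-motion levels traces out the full hazard curve.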
Energy Technology Data Exchange (ETDEWEB)
Escarela-Perez, R. [Departamento de Energia, Universidad Autonoma Metropolitana, Av. San Pablo 180, Col. Reynosa, C.P. 02200, Mexico D.F. (Mexico); Kulkarni, S.V. [Electrical Engineering Department, Indian Institute of Technology, Bombay (India); Melgoza, E. [Instituto Tecnologico de Morelia, Av. Tecnologico 1500, Morelia, Mich., C.P. 58120 (Mexico)
2008-11-15
A six-port impedance network for a three-phase transformer is obtained from a 3D time-harmonic finite-element (FE) model. The network model properly captures the eddy current effects of the transformer tank and frame. All theorems and tools of passive linear networks can be used with the multi-port model to simulate several important operating conditions without resorting anymore to computationally expensive 3D FE simulations. The results of the network model are of the same quality as those produced by the FE program. Although the passive network may seem limited by the assumption of linearity, many important transformer operating conditions imply unsaturated states. Single-phase load-loss measurements are employed to demonstrate the effectiveness of the network model and to understand phenomena that could not be explained with conventional equivalent circuits. In addition, formal deduction of novel closed-form formulae is presented for the calculation of the leakage impedance measured at the high and low voltage sides of the transformer. (author)
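The single-phase load-loss measurement described above corresponds, in network terms, to a driving-point impedance with the other winding short-circuited, which follows from the impedance matrix by a standard Schur-complement reduction. The 2x2 per-phase block below uses made-up values, not the paper's FE results:

```python
import numpy as np

# Illustrative 2x2 block of a transformer impedance matrix (ohms); the
# paper's six-port matrix is extracted from a 3D time-harmonic FE model.
Z = np.array([[2.0 + 30j, 1.9 + 28j],
              [1.9 + 28j, 2.1 + 29j]])

# Driving-point impedance at port 1 with port 2 short-circuited:
# the Schur complement Z11 - Z12 * Z22^-1 * Z21, a standard passive-network
# reduction that mimics a load-loss (short-circuit) test.
z_short = Z[0, 0] - Z[0, 1] * Z[1, 0] / Z[1, 1]
print(np.round(z_short, 3))     # leakage-like: much smaller than Z11
```

The same reduction, applied blockwise, collapses the six-port network to any measured terminal condition without rerunning the FE model.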
Modeling of a ring rosen-type piezoelectric transformer by Hamilton's principle.
Nadal, Clément; Pigache, Francois; Erhart, Jiří
2015-04-01
This paper deals with the analytical modeling of a ring Rosen-type piezoelectric transformer. The developed model is based on a Hamiltonian approach, making it possible to obtain the main parameters and evaluate performance for the first radial vibratory modes. The methodology is detailed, and the final results, both the input admittance and the electric potential distribution on the surface of the secondary part, are compared with numerical and experimental ones for discussion and validation.
SECOND ORDER LEAST SQUARE ESTIMATION ON ARCH(1) MODEL WITH BOX-COX TRANSFORMED DEPENDENT VARIABLE
Directory of Open Access Journals (Sweden)
Herni Utami
2014-03-01
Full Text Available Box-Cox transformation is often used to reduce heterogeneity and to achieve a symmetric distribution of the response variable. In this paper, we estimate the parameters of the Box-Cox transformed ARCH(1) model using the second-order least-square method, and then we study the consistency and asymptotic normality of the second-order least-square (SLS) estimators. SLS estimation was introduced by Wang (2003, 2004) to estimate the parameters of nonlinear regression models with independent and identically distributed errors.
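The Box-Cox transform itself is a one-parameter power family that reduces to the log transform as the parameter tends to zero. A minimal sketch:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform: (y**lam - 1)/lam for lam != 0, log(y) at lam = 0.
    Defined for positive y; lam is chosen to symmetrise the response."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0, 8.0])     # right-skewed series
print(box_cox(y, 0.0))                 # log transform
print(box_cox(y, 1.0))                 # lam = 1 is just a shift: y - 1
```

In the paper's setting the transform is applied to the dependent variable before fitting the ARCH(1) dynamics.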
Hashemi, Hoda Sadat; Boily, Mathieu; Martineau, Paul A.; Rivaz, Hassan
2017-03-01
Ultrasound elastography entails imaging mechanical properties of tissue and is therefore of significant clinical importance. In elastography, two frames of radio-frequency (RF) ultrasound data are obtained while the tissue is undergoing deformation, and the time-delay estimate (TDE) between the two frames is used to infer mechanical properties of tissue. TDE is a critical step in elastography, and is challenging due to noise and signal decorrelation. This paper presents a novel and robust technique for TDE using all samples of RF data simultaneously. We assume tissue deformation can be approximated by an affine transformation, and hence call our method ATME (Affine Transformation Model Elastography). The affine transformation model is utilized to obtain initial estimates of axial and lateral displacement fields. The affine transformation only has six degrees of freedom (DOF), and as such, can be efficiently estimated. A nonlinear cost function that incorporates similarity of RF data intensity and prior information of displacement continuity is formulated to fine-tune the initial affine deformation field. Optimization of this function involves searching for the TDE of all samples of the RF data. The optimization problem is converted to a sparse linear system of equations, which can be solved in real time. Results on simulation are presented for validation. We further collect RF data from in-vivo patellar tendon and medial collateral ligament (MCL), and show that ATME can be used to accurately track tissue displacement.
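The six-DOF affine model for the initial displacement estimate can be sketched as a linear least-squares fit of matched sample positions between the two frames. The synthetic correspondences below are illustrative; the paper estimates the transformation from RF intensity, not from given point matches:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0, 10, size=(50, 2))              # sample positions in frame 1
A_true = np.array([[1.02, 0.01],                  # mild compression + shear
                   [0.00, 0.97]])
t_true = np.array([0.1, -0.2])
q = p @ A_true.T + t_true                         # positions in frame 2

# Each point contributes a row [x, z, 1]; solving X @ params = q recovers
# the six DOF as params = [A | t]^T by ordinary least squares.
X = np.hstack([p, np.ones((len(p), 1))])
params, *_ = np.linalg.lstsq(X, q, rcond=None)
A_est, t_est = params[:2].T, params[2]
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))  # → True True
```

The displacement field is then A p + t - p at every sample, which seeds the nonlinear fine-tuning stage.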
Improved modeling of new three-phase high voltage transformer with magnetic shunts
Directory of Open Access Journals (Sweden)
Chraygane M.
2015-03-01
Full Text Available This original paper deals with a new approach to the study of the behavior in the nonlinear regime of a new three-phase high-voltage power supply for magnetrons, used for microwave generators in industrial applications. The design of this system is composed of a new three-phase leakage-flux transformer supplying, per phase, a cell composed of a capacitor and a diode, which multiplies the voltage and stabilizes the current. Each cell, in turn, supplies a single magnetron. An equivalent model of this transformer is developed taking into account the saturation phenomenon and the stabilization process of each magnetron. Each inductance of the model is characterized by a nonlinear relation between flux and current. This model was tested with EMTP software near the nominal state. The theoretical results were compared to experimental measurements with good agreement. Relative to the current device, the new system provides gains in size, volume, cost of implementation and maintenance which make it more economical.
Directory of Open Access Journals (Sweden)
Pieter-Jan Vlok
2012-01-01
Full Text Available
ENGLISH ABSTRACT: Increased competitiveness in the production world necessitates improved maintenance strategies to increase availabilities and drive down cost. The maintenance engineer is thus faced with the need to make more intelligent preventive renewal decisions. Two of the main techniques to achieve this are Condition Monitoring (such as vibration monitoring and oil analysis) and Statistical Failure Analysis (typically using probabilistic techniques). The present paper discusses these techniques, their uses and weaknesses, and then presents the Proportional Hazards Model as a solution to most of these weaknesses. It then goes on to compare the results of the different techniques in monetary terms, using a South African case study. This comparison shows clearly that the Proportional Hazards Model is superior to the present techniques and should be the preferred model for many actual maintenance situations.
AFRIKAANSE OPSOMMING (translated): Increased levels of competition in the production environment necessitate improved maintenance strategies to increase equipment availability and minimise cost. Maintenance engineers must consequently make more intelligent preventive renewal decisions. Two prominent techniques for reaching this goal are Condition Monitoring (such as vibration monitoring or oil analysis) and Statistical Failure Analysis (usually by means of probabilistic methods). In this article we consider both these techniques, their uses and shortcomings, and then propose the Proportional Hazards Model as a solution to most of the shortcomings. The article also compares the different techniques in monetary terms by making use of a South African case study. This comparison clearly shows that the Proportional Hazards Model holds greater promise than the current techniques and that it should be the preferred solution in many real maintenance situations.
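The Proportional Hazards Model referred to in both abstracts factorises the failure intensity into a baseline hazard and an exponential covariate term, h(t|x) = h0(t)·exp(β·x). A minimal sketch with an assumed Weibull baseline and an illustrative condition-monitoring covariate (not the case-study values):

```python
import math

def hazard(t, x, beta, h0):
    """Cox proportional hazards intensity: h(t|x) = h0(t) * exp(beta . x)."""
    return h0(t) * math.exp(sum(b * xi for b, xi in zip(beta, x)))

# Assumed Weibull baseline h0(t) = (k/lam) * (t/lam)**(k-1): wear-out for k > 1.
k, lam = 2.5, 1000.0
h0 = lambda t: (k / lam) * (t / lam) ** (k - 1)
beta = [0.8]                                       # illustrative covariate effect

# Hazard ratio for a monitored reading (e.g. vibration level) of 1.2 vs 0.0
ratio = hazard(500.0, [1.2], beta, h0) / hazard(500.0, [0.0], beta, h0)
print(round(ratio, 3))                             # → 2.612 (= exp(0.8 * 1.2))
```

This is how the model joins the two threads of the paper: condition-monitoring readings enter as covariates x, while the statistical failure history determines the baseline h0.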
Probabilistic Seismic Hazard Analysis for Yemen
Directory of Open Access Journals (Sweden)
Rakesh Mohindra
2012-01-01
Full Text Available A stochastic-event probabilistic seismic hazard model, which can be used further for estimates of seismic loss and seismic risk analysis, has been developed for the territory of Yemen. An updated composite earthquake catalogue has been compiled using the databases from two basic sources and several research publications. The spatial distribution of earthquakes from the catalogue was used to define and characterize the regional earthquake source zones for Yemen. To capture all possible scenarios in the seismic hazard model, a stochastic event set has been created consisting of 15,986 events generated from 1,583 fault segments in the delineated seismic source zones. Distribution of horizontal peak ground acceleration (PGA was calculated for all stochastic events considering epistemic uncertainty in ground-motion modeling using three suitable ground-motion prediction relationships, which were applied with equal weight. The probabilistic seismic hazard maps were created showing PGA and MSK seismic intensity at 10% and 50% probability of exceedance in 50 years, considering local soil site conditions. The resulting PGA for 10% probability of exceedance in 50 years (return period 475 years ranges from 0.2 g to 0.3 g in western Yemen and generally is less than 0.05 g across central and eastern Yemen. The largest contributors to Yemen’s seismic hazard are the events from the West Arabian Shield seismic zone.
Hazardous materials management and compliance training
International Nuclear Information System (INIS)
Dalton, T.F.
1991-01-01
OSHA training for hazardous waste site workers is required by the Superfund Amendments and Reauthorization Act of 1986 (SARA). In December 1986, a series of regulations was promulgated by OSHA on an interim basis calling for the training of workers engaged in hazardous waste operations. Subsequent to these interim regulations, final rules were promulgated and these final rules on hazardous waste operations and emergency response became effective on March 6, 1990. OSHA has conducted hearings on the accreditation of training programs. OSHA would like to follow the accreditation process under the AHERA regulations for asbestos, in which the model plan for accreditation of asbestos abatement training was included in Section 206 of Title II of the Toxic Substances Control Act (TSCA). OSHA proposed on January 26, 1990, to perform the accreditation of training programs for hazardous waste operations, and that proposal suggested following a model plan similar to the one used for AHERA. It did not propose to accredit training programs for workers engaged in emergency response. These new regulations pose a significant problem to the various contractors and emergency responders who deal with hazardous materials spill response, cleanup and site remediation, since these programs have expanded so quickly that many people are not familiar with which particular segment of the training they are required to have, or whether programs that have yet to be accredited are satisfactory for this type of training. Title III of SARA stipulates a training program for first responders which includes local emergency response organizations such as firemen and policemen. The purpose of this paper is to discuss the needs of workers at hazardous waste site remediation projects and workers who are dealing with hazardous substances, spill response and cleanup.
Directory of Open Access Journals (Sweden)
Rasool Mahdavi Najafabadi
2016-01-01
Full Text Available In this paper, among multi-criteria models for complex decision-making and multiple-attribute models for assigning the most preferable choice, the technique for order preference by similarity to ideal solution (TOPSIS) is applied. The main objective of this research is to identify potential natural hazards in Bandar Abbas city, Iran, using the TOPSIS model, which is based on an analytical hierarchy process structure. A set of 12 relevant geomorphologic parameters, including earthquake frequency, distance from the earthquake epicentre, number of faults, flood, talus creep, landslide, land subsidence, tide, hurricane and tidal wave, dust storms with external source, wind erosion and sea level fluctuations, is considered to quantify inputs of the model. The outputs of this study indicate that one region, among three assessed regions, has the maximum potential occurrence of natural hazards, while it has been urbanized at a greater rate compared to other regions. Furthermore, based on the Delphi method, the earthquake frequency and the landslide are the most and the least dangerous phenomena, respectively.
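The TOPSIS ranking step described above can be sketched compactly. This is a generic minimal implementation for benefit-type criteria; the three-region, two-criterion decision matrix and the weights are illustrative stand-ins, not the study's 12 geomorphologic parameters.

```python
import math

def topsis(matrix, weights):
    """Score alternatives by relative closeness to the ideal solution
    (all criteria treated as benefit criteria: higher raw value = more hazardous)."""
    n_crit = len(matrix[0])
    # Vector-normalize each criterion column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive-ideal solution
    anti = [min(col) for col in zip(*v)]    # negative-ideal solution
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three regions scored on two hazard criteria; region 0 dominates region 2,
# which dominates region 1, so the closeness scores must preserve that order.
scores = topsis([[3.0, 4.0], [1.0, 2.0], [2.0, 3.0]], weights=[0.6, 0.4])
```

In the study, the weights would come from the AHP/Delphi stage; here they are arbitrary.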
Anderson, E. R.; Griffin, R.; Irwin, D.
2013-12-01
Heavy rains and steep, volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not the shallow landslides that occur nearly every year. This is despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit filling techniques in the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provides an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provides the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved through ArcInfo spatial analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of original elevation values
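The pit filling discussed above can be illustrated with a toy 1D analogue: each cell of an elevation profile is raised to the lowest spill level over a path to either edge, which is exactly the "drastic modification of original elevation values" the abstract warns about. Real DEM preprocessing (e.g., the ArcInfo/ArcGIS fill tool) does the same thing in 2D with drainage to the raster boundary.

```python
def fill_pits_1d(elev):
    """Fill closed depressions in a 1D elevation profile so water can drain
    to either end; returns the hydrologically conditioned profile."""
    n = len(elev)
    left = [0.0] * n   # running maximum from the left edge, inclusive
    right = [0.0] * n  # running maximum from the right edge, inclusive
    left[0], right[-1] = elev[0], elev[-1]
    for i in range(1, n):
        left[i] = max(left[i - 1], elev[i])
        right[n - 1 - i] = max(right[n - i], elev[n - 1 - i])
    # Each cell ponds up to the lower of its two confining barriers.
    return [max(z, min(l, r)) for z, l, r in zip(elev, left, right)]

profile = [3.0, 1.0, 2.0, 1.0, 4.0]   # a double pit between two peaks
filled = fill_pits_1d(profile)        # the basin rises to its 3.0 spill level
```

Note that the two interior cells at elevation 1.0 and the ridge at 2.0 are all raised to 3.0, a change of up to 2 m from the original surface, which is why the choice of filling technique materially affects the delineated hazard zones.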
A High-Rate, Single-Crystal Model including Phase Transformations, Plastic Slip, and Twinning
Energy Technology Data Exchange (ETDEWEB)
Addessio, Francis L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bronkhorst, Curt Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bolme, Cynthia Anne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Explosive Science and Shock Physics Division; Brown, Donald William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Cerreta, Ellen Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lebensohn, Ricardo A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Mayeur, Jason Rhea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Morrow, Benjamin M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Rigg, Paulo A. [Washington State Univ., Pullman, WA (United States). Dept. of Physics. Inst. for Shock Physics
2016-08-09
An anisotropic, rate-dependent, single-crystal approach for modeling materials under the conditions of high strain rates and pressures is provided. The model includes the effects of large deformations, nonlinear elasticity, phase transformations, and plastic slip and twinning. It is envisioned that the model may be used to examine these coupled effects on the local deformation of materials that are subjected to ballistic impact or explosive loading. The model is formulated using a multiplicative decomposition of the deformation gradient. A plate impact experiment on a multi-crystal sample of titanium was conducted. The particle velocities at the back surface of three crystal orientations relative to the direction of impact were measured. Molecular dynamics simulations were conducted to investigate the details of the high-rate deformation and pursue issues related to the phase transformation for titanium. Simulations using the single crystal model were conducted and compared to the high-rate experimental data for the impact loaded single crystals. The model was found to capture the features of the experiments.
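The multiplicative decomposition named in the abstract above can be shown with a tiny 2D example. The matrices are made-up illustrative values, not model output: plastic slip is taken as a volume-preserving simple shear (det Fp = 1), so any volume change in the total deformation gradient comes from the elastic part.

```python
def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

Fp = [[1.0, 0.1], [0.0, 1.0]]    # plastic part: simple shear from slip
Fe = [[1.01, 0.0], [0.0, 1.01]]  # elastic part: small volumetric stretch
F = matmul2(Fe, Fp)              # total deformation gradient, F = Fe . Fp
```

Because determinants multiply, det F = det Fe · det Fp, so the isochoric plastic contribution drops out of the volume change exactly as the kinematics requires.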
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and a 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM DEM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
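The volume / free-surface-elevation relationship at the heart of the up-scaling technique described above can be sketched as follows. A coarse cell aggregates several fine cells but keeps the fine topography for its storage curve; the elevations, cell area, and stage below are illustrative numbers, not values from the FM Global model.

```python
def stored_volume(stage, fine_elevations, cell_area):
    """Water volume in one coarse cell at a given free-surface elevation,
    computed from the full-resolution topography it aggregates."""
    return sum(max(0.0, stage - z) * cell_area for z in fine_elevations)

# Four fine cells (elevations in m) aggregated into one coarse cell;
# each fine cell covers 90 m x 90 m = 8100 m^2.
fine_z = [0.0, 1.0, 2.0, 3.0]
vol = stored_volume(2.5, fine_z, cell_area=8100.0)
```

Inverting this monotone curve (stage as a function of stored volume) is what lets the coarse-grid solver recover a sub-grid-accurate water surface, which is the accuracy-for-speed trade the abstract describes.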
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
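The tornado-diagram technique mentioned above can be sketched generically: swing each input between its low and high branch while holding the others at baseline, and rank inputs by the resulting swing in the output. The loss function and parameter names below are stand-ins, not UCERF3 branches.

```python
def tornado(f, baseline, lows, highs):
    """Return {input name: output swing} from one-at-a-time input swings."""
    swings = {}
    for name in baseline:
        lo = dict(baseline, **{name: lows[name]})   # only `name` set low
        hi = dict(baseline, **{name: highs[name]})  # only `name` set high
        swings[name] = abs(f(hi) - f(lo))
    return swings

# Stand-in "portfolio loss" with one dominant and one minor parameter.
loss = lambda p: 3.0 * p["slip_rate"] + 1.0 * p["mag_scaling"]
swings = tornado(loss,
                 baseline={"slip_rate": 1.0, "mag_scaling": 1.0},
                 lows={"slip_rate": 0.5, "mag_scaling": 0.5},
                 highs={"slip_rate": 1.5, "mag_scaling": 1.5})
```

Parameters with small swings are candidates for fixing at a single value, which is how a 57,600-leaf tree can collapse to a few dozen leaves in the reduced-order model.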
Hofstede, ter F.; Wedel, M.
1998-01-01
This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are
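The link between the continuous- and discrete-time hazards in the aggregation study above can be written down directly: under a constant continuous-time hazard rate, aggregating into intervals of width delta yields the discrete hazard below. The rate of 0.1 per day is an arbitrary illustration.

```python
import math

def discrete_hazard(lam, delta):
    """P(event within an interval of width delta | survival to its start),
    for a constant continuous-time hazard rate lam."""
    return 1.0 - math.exp(-lam * delta)

daily = discrete_hazard(0.1, 1.0)    # daily aggregation
weekly = discrete_hazard(0.1, 7.0)   # weekly aggregation
```

Note that the weekly discrete hazard is strictly less than seven times the daily one; this nonlinearity is one source of the aggregation effects the Monte Carlo study examines.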
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne diseases for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed.