WorldWideScience

Sample records for solvation models reliable

  1. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate as more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach is an unreliable way to model phosphate hydrolysis in solution.

  2. Generalized Born Models of Macromolecular Solvation Effects

    Science.gov (United States)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
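    As a point of reference, the sketch below implements the standard pair-wise generalized Born polarization energy with Still's interpolation formula. It is a minimal illustration, not the authors' implementation: the effective Born radii are assumed to be supplied, and the function name, units (kcal/mol, Angstrom, elementary charges) and the default solvent dielectric are choices made only for this example.

      import numpy as np

      def gb_polarization_energy(coords, charges, born_radii, eps_solvent=78.5):
          """Pair-wise generalized Born (Still-type) polarization energy.

          coords     : (N, 3) array of positions in Angstrom
          charges    : (N,) array of partial charges in units of e
          born_radii : (N,) array of effective Born radii in Angstrom (assumed given)
          Returns the energy in kcal/mol (332.06 converts e^2/Angstrom to kcal/mol).
          """
          ke = 332.06
          n = len(charges)
          energy = 0.0
          for i in range(n):
              for j in range(n):
                  r2 = float(np.sum((coords[i] - coords[j]) ** 2))
                  rirj = born_radii[i] * born_radii[j]
                  # Still's f_GB interpolates between the Coulomb limit (large r)
                  # and the Born self-energy limit (r -> 0).
                  f_gb = np.sqrt(r2 + rirj * np.exp(-r2 / (4.0 * rirj)))
                  energy += charges[i] * charges[j] / f_gb
          return -0.5 * ke * (1.0 - 1.0 / eps_solvent) * energy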

  3. Advanced dielectric continuum model of preferential solvation

    Science.gov (United States)

    Basilevsky, Mikhail; Odinokov, Alexey; Nikitina, Ekaterina; Grigoriev, Fedor; Petrov, Nikolai; Alfimov, Mikhail

    2009-01-01

    A continuum model for solvation effects in binary solvent mixtures is formulated in terms of the density functional theory. The presence of two variables, namely, the dimensionless solvent composition y and the dimensionless total solvent density z, is an essential feature of binary systems. Their coupling, hidden in the structure of the local dielectric permittivity function, is postulated at the phenomenological level. Local equilibrium conditions are derived by a variation in the free energy functional expressed in terms of the composition and density variables. They appear as a pair of coupled equations defining y and z as spatial distributions. We consider the simplest spherically symmetric case of the Born-type ion immersed in the benzene/dimethylsulfoxide (DMSO) solvent mixture. The profiles of y(R) and z(R) along the radius R, which measures the distance from the ion center, are found in molecular dynamics (MD) simulations. It is shown that for a given solute ion z(R) does not depend significantly on the composition variable y. A simplified solution is then obtained by inserting z(R), found in the MD simulation for the pure DMSO, in the single equation which defines y(R). In this way composition dependences of the main solvation effects are investigated. The local density augmentation appears as a peak of z(R) at the ion boundary. It is responsible for the fine solvation effects missing when the ordinary solvation theories, in which z = 1, are applied. These phenomena, studied for negative ions, reproduce consistently the simulation results. For positive ions the simulation shows that z ≫ 1 (z = 5-6 at the maximum of the z peak), which means that an extremely dense solvation shell is formed. In such a situation the continuum description fails to be valid within a consistent parametrization.
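    The "ordinary solvation theories, in which z = 1" referred to above reduce, for a spherical ion, to the classical Born expression for the electrostatic solvation free energy. The snippet below is a minimal sketch of that reference formula; the example cavity radius and the DMSO permittivity value are illustrative assumptions.

      import scipy.constants as const

      def born_solvation_energy(q, radius_nm, eps_r):
          """Classical Born estimate of the ion solvation free energy in kJ/mol.

          q         : ionic charge in units of e
          radius_nm : Born (cavity) radius in nanometres
          eps_r     : static relative permittivity of the solvent
          """
          a = radius_nm * 1e-9
          energy_per_ion = -(q * const.e) ** 2 / (8 * const.pi * const.epsilon_0 * a) * (1.0 - 1.0 / eps_r)
          return energy_per_ion * const.Avogadro / 1000.0  # J per ion -> kJ/mol

      # A monovalent ion with an assumed 0.2 nm cavity radius in DMSO (eps_r ~ 47).
      print(born_solvation_energy(q=1, radius_nm=0.2, eps_r=47.0))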

  4. Molecular modeling of nucleic Acid structure: electrostatics and solvation.

    Science.gov (United States)

    Bergonzo, Christina; Galindo-Murillo, Rodrigo; Cheatham, Thomas E

    2014-12-19

    This unit presents an overview of computer simulation techniques as applied to nucleic acid systems, ranging from simple in vacuo molecular modeling techniques to more complete all-atom molecular dynamics treatments that include an explicit representation of the environment. The third in a series of four units, this unit focuses on critical issues in solvation and the treatment of electrostatics. UNITS 7.5 & 7.8 introduced the modeling of nucleic acid structure at the molecular level. This included a discussion of how to generate an initial model, how to evaluate the utility or reliability of a given model, and ultimately how to manipulate this model to better understand its structure, dynamics, and interactions. Subject to an appropriate representation of the energy, such as a specifically parameterized empirical force field, the techniques of minimization and Monte Carlo simulation, as well as molecular dynamics (MD) methods, were introduced as a way of sampling conformational space for a better understanding of the relevance of a given model. This discussion highlighted the major limitations with modeling in general. When sampling conformational space effectively, difficult issues are encountered, such as multiple minima or conformational sampling problems, and accurately representing the underlying energy of interaction. In order to provide a realistic model of the underlying energetics for nucleic acids in their native environments, it is crucial to include some representation of solvation (by water) and also to properly treat the electrostatic interactions. These subjects are discussed in detail in this unit. Copyright © 2014 John Wiley & Sons, Inc.

  5. Solvation-based vapour pressure model for (solvent + salt) systems in conjunction with the Antoine equation

    International Nuclear Information System (INIS)

    Senol, Aynur

    2013-01-01

    Highlights: • Vapour pressures of (solvent + salt) systems have been estimated through a solvation-based model. • Two structural forms of the generalized solvation model using the Antoine equation have been examined. • A simplified concentration-dependent vapour pressure model has also been processed. • The model reliability analysis has been performed in terms of a log-ratio objective function. • The reliability of the models has been interpreted in terms of the statistical design factors. -- Abstract: This study deals with modelling the vapour pressure of a (solvent + salt) system on the basis of the principles of LSER. The solvation model framework clarifies the simultaneous impact of several physical variables, such as the vapour pressure of the pure solvent estimated by the Antoine equation, the solubility and solvatochromic parameters of the solvent, and the physical properties of the ionic salt. The performance of two structural forms of the generalized model has been analyzed independently, i.e., a relation that integrates the properties of the solvent and the ionic salt, and a relation on a reduced property basis. A simplified concentration-dependent vapour pressure model has also been explored and implemented on the relevant systems. The vapour pressure data of sixteen (solvent + salt) systems have been processed to analyze statistically the reliability of the existing models in terms of a log-ratio objective function. The proposed vapour pressure models match the observed performance relatively well, yielding overall design factors of 1.066 and 1.073 for the solvation-based models with the integrated and reduced properties, respectively, and 1.008 for the concentration-based model.
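    For context, the Antoine equation that supplies the pure-solvent vapour pressure input to the model has the form log10(P) = A - B/(C + T). The sketch below is a minimal illustration; the coefficients shown are commonly tabulated values for water (pressure in mmHg, temperature in degrees Celsius) used here only as an example, not parameters from the paper.

      def antoine_vapour_pressure(T_celsius, A, B, C):
          """Antoine equation: log10(P) = A - B / (C + T).

          Returns the pure-solvent vapour pressure in the units implied by the
          coefficients (commonly mmHg with T in degrees Celsius).
          """
          return 10.0 ** (A - B / (C + T_celsius))

      # Widely tabulated water coefficients (valid roughly 1-100 degC, pressure in mmHg);
      # at 100 degC this returns approximately 760 mmHg.
      print(antoine_vapour_pressure(100.0, A=8.07131, B=1730.63, C=233.426))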

  6. Differential geometry based solvation model II: Lagrangian formulation.

    Science.gov (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  7. A linear solvation energy relationship model of organic chemical partitioning to dissolved organic carbon.

    Science.gov (United States)

    Kipka, Undine; Di Toro, Dominic M

    2011-09-01

    Predicting the association of contaminants with both particulate and dissolved organic matter is critical in determining the fate and bioavailability of chemicals in environmental risk assessment. To date, the association of a contaminant to particulate organic matter is considered in many multimedia transport models, but the effect of dissolved organic matter is typically ignored due to a lack of either reliable models or experimental data. The partition coefficient to dissolved organic carbon (K(DOC)) may be used to estimate the fraction of a contaminant that is associated with dissolved organic matter. Models relating K(DOC) to the octanol-water partition coefficient (K(OW)) have not been successful for many types of dissolved organic carbon in the environment. Instead, linear solvation energy relationships are proposed to model the association of chemicals with dissolved organic matter. However, more chemically diverse K(DOC) data are needed to produce a more robust model. For humic acid dissolved organic carbon, the linear solvation energy relationship predicts log K(DOC) with a root mean square error of 0.43. Copyright © 2011 SETAC.
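    The linear solvation energy relationship referred to here is of the Abraham form, log K = c + eE + sS + aA + bB + vV, where E, S, A, B and V are solute descriptors and c, e, s, a, b, v are fitted system constants. The sketch below only illustrates how such a correlation is evaluated; the descriptor and coefficient values are placeholders, not the fitted humic-acid K(DOC) coefficients from the paper.

      def lser_log_k(descriptors, coefficients):
          """Evaluate an Abraham-type LSER: log K = c + e*E + s*S + a*A + b*B + v*V."""
          return coefficients["c"] + sum(
              coefficients[term] * descriptors[term.upper()]
              for term in ("e", "s", "a", "b", "v")
          )

      # Placeholder solute descriptors and system constants (illustrative only).
      solute = {"E": 0.80, "S": 0.52, "A": 0.00, "B": 0.14, "V": 1.08}
      system = {"c": 0.2, "e": 0.6, "s": -0.4, "a": -0.3, "b": -2.5, "v": 2.8}
      print(lser_log_k(solute, system))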

  8. Where do ions solvate?

    Indian Academy of Sciences (India)

    We study a simple model of ionic solvation inside a water cluster. The cluster is modeled as a spherical dielectric continuum. It is found that unpolarizable ions always prefer the bulk solvation. On the other hand, for polarizable ions, there exists a critical value of polarization above which surface solvation becomes ...

  9. Ab initio joint density-functional theory of solvated electrodes, with model and explicit solvation

    Science.gov (United States)

    Arias, Tomas

    2015-03-01

    the electrochemical context and how it is needed for realistic description of solvated electrode systems, and how simple "implicit" polarized continuum methods fail radically in this context. Finally, we shall present a series of results relevant to battery, supercapacitor, and solar-fuel systems, one of which has led to a recent invention disclosure for improving battery cycle lifetimes. Supported as a part of the Energy Materials Center at Cornell, an Energy Frontier Research Center funded by DOE/BES (award de-sc0001086) and by the New York State Division of Science, Technology and Innovation (NYSTAR, award 60923).

  10. Modelos contínuos do solvente: fundamentos (Continuum solvation models: fundamentals)

    Directory of Open Access Journals (Sweden)

    Josefredo R. Pliego Jr

    2006-06-01

    Continuum solvation models are nowadays widely used in the modeling of solvent effects, and the range of applications goes from the calculation of partition coefficients to chemical reactions in solution. The present work presents a detailed explanation of the physical foundations of continuum models. We discuss the polarization of a dielectric and its representation through the volume and surface polarization charges. The Poisson equation for a dielectric is obtained, and we also derive and discuss the apparent surface charge method and its application to free energy of solvation calculations.

  11. Solvation of monovalent anions in formamide and methanol: Parameterization of the IEF-PCM model

    International Nuclear Information System (INIS)

    Boees, Elvis S.; Bernardi, Edson; Stassen, Hubert; Goncalves, Paulo F.B.

    2008-01-01

    The thermodynamics of solvation for a series of monovalent anions in formamide and methanol has been studied using the polarizable continuum model (PCM). The parameterization of this continuum model was guided by molecular dynamics simulations. The parameterized PCM model predicts the Gibbs free energies of solvation for 13 anions in formamide and 16 anions in methanol in very good agreement with experimental data. Two sets of atomic radii were tested in the definition of the solute cavities in the PCM and their performances are evaluated and discussed. Mean absolute deviations of the calculated free energies of solvation from the experimental values are in the range of 1.3-2.1 kcal/mol

  12. Applications of the solvation parameter model in reversed-phase liquid chromatography.

    Science.gov (United States)

    Poole, Colin F; Lenca, Nicole

    2017-02-24

    The solvation parameter model is widely used to provide insight into the retention mechanism in reversed-phase liquid chromatography, for column characterization, and in the development of surrogate chromatographic models for biopartitioning processes. The properties of the separation system are described by five system constants representing all possible intermolecular interactions for neutral molecules. The general model can be extended to include ions and enantiomers by adding new descriptors to encode the specific properties of these compounds. System maps provide a comprehensive overview of the separation system as a function of mobile phase composition and/or temperature for method development. The solvation parameter model has been applied to gradient elution separations, but here theory and practice suggest a cautious approach, since the interpretation of system and compound properties derived from its use is approximate. A growing application of the solvation parameter model in reversed-phase liquid chromatography is the screening of surrogate chromatographic systems for estimating biopartitioning properties. Throughout the discussion of the above topics, successes as well as known and likely deficiencies of the solvation parameter model are described, with an emphasis on the role of the heterogeneous properties of the interphase region in the interpretation and understanding of the general retention mechanism in reversed-phase liquid chromatography for porous chemically bonded sorbents. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CER's and, where possible, CTR's in devising a suitable cost-effective policy.

  14. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies

    Czech Academy of Sciences Publication Activity Database

    Bardhan, J. P.; Jungwirth, Pavel; Makowski, L.

    Roč. 137, č. 12 (2012), 124101/1-124101/6 ISSN 0021-9606 R&D Projects: GA MŠk LH12001 Institutional research plan: CEZ:AV0Z40550506 Keywords: ion solvation * continuum models * linear response Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.164, year: 2012

  15. The charge-asymmetric nonlocally determined local-electric (CANDLE) solvation model

    Energy Technology Data Exchange (ETDEWEB)

    Sundararaman, Ravishankar; Goddard, William A. [Joint Center for Artificial Photosynthesis, Pasadena, California 91125 (United States)

    2015-02-14

    Many important applications of electronic structure methods involve molecules or solid surfaces in a solvent medium. Since explicit treatment of the solvent in such methods is usually not practical, calculations often employ continuum solvation models to approximate the effect of the solvent. Previous solvation models either involve a parametrization based on atomic radii, which limits the class of applicable solutes, or based on solute electron density, which is more general but less accurate, especially for charged systems. We develop an accurate and general solvation model that includes a cavity that is a nonlocal functional of both solute electron density and potential, local dielectric response on this nonlocally determined cavity, and nonlocal approximations to the cavity-formation and dispersion energies. The dependence of the cavity on the solute potential enables an explicit treatment of the solvent charge asymmetry. With four parameters per solvent, this “CANDLE” model simultaneously reproduces solvation energies of large datasets of neutral molecules, cations, and anions with a mean absolute error of 1.8 kcal/mol in water and 3.0 kcal/mol in acetonitrile.

  16. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  17. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  18. Incorporation of Hydrogen Bond Angle Dependency into the Generalized Solvation Free Energy Density Model.

    Science.gov (United States)

    Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai

    2018-04-23

    To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described

  19. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum plan of supplies, using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated using an economic criterion together with the model for evaluating the probability of non-failure operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability is also presented.
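    As a minimal sketch of the kind of linear programming reduction mentioned above, the snippet below solves a toy supply-planning problem with scipy. The variables, costs, capacities and demand are illustrative assumptions and do not reproduce the paper's model, which additionally incorporates the suppliers' functional reliability.

      import numpy as np
      from scipy.optimize import linprog

      # Toy supply-planning LP: choose volumes x_i purchased from three suppliers
      # to meet total demand at minimum cost, subject to per-supplier capacities.
      cost = np.array([4.0, 6.0, 5.0])         # unit cost per supplier (illustrative)
      capacity = np.array([40.0, 30.0, 50.0])  # maximum volume per supplier
      demand = 80.0                            # required total volume

      result = linprog(
          c=cost,
          A_ub=np.identity(3), b_ub=capacity,   # x_i <= capacity_i
          A_eq=np.ones((1, 3)), b_eq=[demand],  # sum_i x_i == demand
          bounds=[(0, None)] * 3,
          method="highs",
      )
      print(result.x, result.fun)  # optimal supply plan and its total cost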

  20. Refined Dummy Atom Model of Mg(2+) by Simple Parameter Screening Strategy with Revised Experimental Solvation Free Energy.

    Science.gov (United States)

    Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei

    2015-12-28

    Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is highly imperative. To extend its application range and improve the performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks needed to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.

  1. Hydrophobic & hydrophilic: Theoretical models of solvation for molecular biophysics

    International Nuclear Information System (INIS)

    Pratt, L.R.; Tawa, G.J.; Hummer, G.; Garcia, A.E.; Corcelli, S.A.

    1996-01-01

    Molecular statistical thermodynamic models of hydration for chemistry and biophysics have advanced abruptly in recent years. With liquid water as solvent, solvation phenomena are classified as either hydrophobic or hydrophilic effects. Recent progress in the treatment of hydrophilic effects has been motivated by continuum dielectric models interpreted as a modelistic implementation of second order perturbation theory. New results testing that perturbation theory of hydrophilic effects are presented and discussed. Recent progress in the treatment of hydrophobic effects has been achieved by applying information theory to discover models of packing effects in dense liquids. The simplest models to which those ideas lead are presented and discussed.

  2. Biomolecular electrostatics—I want your solvation (model)

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P

    2012-01-01

    We review the mathematical and computational foundations for implicit-solvent models in theoretical chemistry and molecular biophysics. These models are valuable theoretical tools for studying the influence of a solvent, often water or an aqueous electrolyte, on a molecular solute such as a protein. Detailed chemical and physical aspects of implicit-solvent models have been addressed in numerous exhaustive reviews, as have numerical algorithms for simulating the most popular models. This work highlights several important conceptual developments, focusing on selected works that spotlight the need for research at the intersections between chemical, biological, mathematical, and computational physics. To introduce the field to computational scientists, we begin by describing the basic theoretical ideas of implicit-solvent models and numerical implementations. We then address practical and philosophical challenges in parameterization, and major advances that speed up calculations (covering continuum theories based on Poisson as well as faster approximate theories such as generalized Born). We briefly describe the main shortcomings of existing models, and survey promising developments that deliver improved realism in a computationally tractable way, i.e. without increasing simulation time significantly. The review concludes with a discussion of ongoing modeling challenges and relevant trends in high-performance computing and computational science. (topical review)

  3. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  4. Computational 17O-NMR spectroscopy of organic acids and peracids: comparison of solvation models

    International Nuclear Information System (INIS)

    Baggioli, Alberto; Castiglione, Franca; Raos, Guido; Crescenzi, Orlando; Field, Martin J.

    2013-01-01

    We examine several computational strategies for the prediction of the 17O-NMR shielding constants for a selection of organic acids and peracids in aqueous solution. In particular, we consider water (the solvent and reference for the chemical shifts), hydrogen peroxide, acetic acid, lactic acid and peracetic acid. First of all, we demonstrate that the PBE0 density functional in combination with the 6-311+G(d,p) basis set provides an excellent compromise between computational cost and accuracy in the calculation of the shielding constants. Next, we move on to the problem of the solvent representation. Our results confirm the shortcomings of the Polarizable Continuum Model (PCM) in the description of systems susceptible to strong hydrogen bonding interactions, while at the same time they demonstrate its usefulness within a molecular-continuum approach, whereby PCM is applied to describe the solvation of the solute surrounded by some explicit solvent molecules. We examine different models of the solvation shells, sampling their configurations using both energy minimizations of finite clusters and molecular dynamics simulations of bulk systems. Hybrid molecular dynamics simulations, in which the solute is described at the PM6 semiempirical level and the solvent by the TIP3P model, prove to be a promising sampling method for medium-to-large sized systems. The roles of solvent shell size and structure are also briefly discussed. (authors)

  5. Lieb-Liniger-like model of quantum solvation in CO-4HeN clusters

    Science.gov (United States)

    Farrelly, D.; Iñarrea, M.; Lanchares, V.; Salas, J. P.

    2016-05-01

    Small 4He clusters doped with various molecules allow for the study of "quantum solvation" as a function of cluster size. A peculiarity of quantum solvation is that, as the number of 4He atoms is increased from N = 1, the solvent appears to decouple from the molecule which, in turn, appears to undergo free rotation. This is generally taken to signify the onset of "microscopic superfluidity." Currently, little is known about the quantum mechanics of the decoupling mechanism, mainly because the system is a quantum (N + 1)-body problem in three dimensions which makes computations difficult. Here, a one-dimensional model is studied in which the 4He atoms are confined to revolve on a ring and encircle a rotating CO molecule. The Lanczos algorithm is used to investigate the eigenvalue spectrum as the number of 4He atoms is varied. Substantial solvent decoupling is observed for as few as N = 5 4He atoms. Examination of the Hamiltonian matrix, which has an almost block diagonal structure, reveals increasingly weak inter-block (solvent-molecule) coupling as the number of 4He atoms is increased. In the absence of a dopant molecule the system is similar to a Lieb-Liniger (LL) gas and we find a relatively rapid transition to the LL limit as N is increased. In essence, the molecule initially—for very small N—provides a central, if relatively weak, attraction to organize the cluster; as more 4He atoms are added, the repulsive interactions between the identical bosons start to dominate as the solvation ring (shell) becomes more crowded which causes the molecule to start to decouple. For low N, the molecule pins the atoms in place relative to itself; as N increases the atom-atom repulsion starts to dominate the Hamiltonian and the molecule decouples. We conclude that, while the notion of superfluidity is a useful and correct description of the decoupling process, a molecular viewpoint provides complementary insights into the quantum mechanism of the transition from a molecular

  6. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer L.; Christensen, Anders Steen

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS following the approach by Chudinov et al (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin, a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  7. Modeling solvation effects in real-space and real-time within density functional approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, Alain [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy); Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana (Cuba); Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea [Istituto Nanoscienze - CNR, Centro S3, via Campi 213/A, 41125 Modena (Italy)

    2015-10-14

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
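    The singularity regularization described above can be illustrated with the potential of a single apparent charge smeared into a spherical Gaussian: unlike the bare Coulomb potential q/r, it stays finite on grid points arbitrarily close to the charge. The sketch below is a minimal illustration in atomic units; the Gaussian convention (width sigma, density proportional to exp(-r^2/2 sigma^2)) and the example values are assumptions and need not match the OCTOPUS implementation.

      import numpy as np
      from scipy.special import erf

      def gaussian_charge_potential(r, q, sigma):
          """Potential (atomic units) of a charge q smeared into a spherical Gaussian.

          phi(r) = q * erf(r / (sqrt(2) * sigma)) / r, which tends to the finite
          value q * sqrt(2/pi) / sigma as r -> 0 instead of diverging like q / r.
          """
          r = np.asarray(r, dtype=float)
          near_zero = r < 1e-12
          phi = np.empty_like(r)
          phi[near_zero] = q * np.sqrt(2.0 / np.pi) / sigma
          phi[~near_zero] = q * erf(r[~near_zero] / (np.sqrt(2.0) * sigma)) / r[~near_zero]
          return phi

      # On-site, near and far values for a unit charge with sigma = 0.5 bohr.
      print(gaussian_charge_potential([0.0, 0.5, 5.0], q=1.0, sigma=0.5))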

  8. Solvation of actinide salts in water using a polarizable continuum model.

    Science.gov (United States)

    Kumar, Narendra; Seminario, Jorge M

    2015-01-29

    In order to determine how actinide atoms are dressed when solvated in water, density functional theory calculations have been carried out to study the equilibrium structure of uranium, plutonium and thorium salts (UO2(2+), PuO2(2+), Pu(4+), and Th(4+)) both in vacuum as well as in solution represented by a conductor-like polarizable continuum model. This information is of paramount importance for the development of sensitive nanosensors. Both UO2(2+) and PuO2(2+) ions show coordination numbers of 4-5 with counterions replacing one or two water molecules from the first coordination shell. On the other hand, Pu(4+) has a coordination number of 8 both when completely solvated and also in the presence of chloride and nitrate ions, with counterions replacing water molecules in the first shell. Nitrates were found to bind more strongly to Pu(IV) than chloride anions. In the case of the Th(IV) ion, the coordination number was found to be 9 or 10 in the presence of chlorides. Moreover, the Pu(IV) ion shows greater affinity for chlorides than the Th(IV) ion. Adding dispersion and ZPE corrections to the binding energy does not alter the trends in relative stability of several conformers because of error cancellations. All structures and energetics of these complexes are reported.

  9. Solvation free energies and partition coefficients with the coarse-grained and hybrid all-atom/coarse-grained MARTINI models.

    Science.gov (United States)

    Genheden, Samuel

    2017-10-01

    We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.
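    The water/hexane and water/octanol partition coefficients quoted above follow directly from the difference between the solvation free energies in the two phases. The sketch below shows that conversion; the numerical inputs are illustrative assumptions, not values from the MARTINI calculations.

      import math

      GAS_CONSTANT = 8.314462618e-3  # kJ/(mol K)

      def log_partition_coefficient(dg_water, dg_organic, temperature=298.15):
          """log10 P for transfer from water to an organic phase, from solvation
          free energies in kJ/mol:

              log P = (dG_solv(water) - dG_solv(organic)) / (ln(10) * R * T)

          A positive value means the solute prefers the organic phase."""
          return (dg_water - dg_organic) / (math.log(10.0) * GAS_CONSTANT * temperature)

      # A solute solvated 10 kJ/mol more favourably in octanol than in water
      # (illustrative numbers) has log P(octanol/water) of about 1.75.
      print(log_partition_coefficient(dg_water=-20.0, dg_organic=-30.0))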

  11. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  12. Infrared spectroscopy of model electrochemical interfaces in ultrahigh vacuum: some implications for ionic and chemisorbate solvation at electrode surfaces

    Science.gov (United States)

    Villegas, Ignacio; Kizhakevariam, Naushad; Weaver, Michael J.

    1995-07-01

    The utility of infrared reflection-absorption spectroscopy (IRAS) for examining structure and bonding for model electrochemical interfaces in ultrahigh vacuum (UHV) is illustrated, focusing specifically on the solvation of cations and chemisorbed carbon monoxide on Pt(111). These systems were chosen partly in view of the availability of IRAS data (albeit limited to chemisorbate vibrations) for the corresponding in-situ metal-solution interfaces, enabling direct spectral comparisons to be made with the "UHV electrochemical model" systems. Kelvin probe measurements of the metal-UHV surface potential changes (ΔΦ) attending alterations in the interfacial composition are also described: these provide the required link to the in-situ electrode potentials as well as yielding additional insight into surface solvation. Variations in the negative electronic charge density and, correspondingly, in the cation surface concentration (thereby mimicking charge-induced alterations in the electrode potential below the potential of zero charge) are achieved by potassium atom dosage onto Pt(111). Of the solvents selected for discussion here — deuterated water, methanol, and acetonitrile — the first two exhibit readily detectable vibrational bands which provide information on the ionic solvation structure. Progressively dosing these solvents onto Pt(111) in the presence of low potassium coverages yields marked alterations in the solvent vibrational bands which can be understood in terms of sequential cation solvation. Comparison between these spectra for methanol with analogous data for sequential methanol solvation of gas-phase alkali cations enables the influence of the interfacial environment to be assessed. The effects of solvating chemisorbed CO are illustrated for acetonitrile; the markedly larger shifts in CO frequencies and binding sites for dilute CO adlayers can be accounted for in terms of short-range coadsorbate interactions in addition to longer-range Stark effects

  13. Reliability Overhaul Model

    Science.gov (United States)

    1989-08-01

    Random variables for the conditional exponential and conditional Weibull distributions are generated using the inverse transform method: a uniform variate U ~ U(0,1) is generated and the conditional survival function is inverted to obtain the additional operating time s of an item that has already survived to age x. For the conditional Weibull distribution, the survival function takes the form exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β), where γ, η and β are the location, scale and shape parameters. Normal random variables are generated using a standard normal transformation together with the inverse transform method.
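    The inversion described above can be written out explicitly. The sketch below is a minimal illustration of inverse-transform sampling for the conditional (residual-life) Weibull distribution; the parameter names follow the reconstruction above, and the example values are assumptions rather than inputs of the original model.

      import math
      import random

      def conditional_weibull_sample(age, beta, eta, gamma=0.0, rng=random.random):
          """Sample the remaining life s of a three-parameter Weibull item
          (shape beta, scale eta, location gamma) that has survived to 'age'.

          Residual-life survival: S(s | age) = exp(-((age+s-gamma)/eta)**beta
                                                   + ((age-gamma)/eta)**beta).
          Setting S = U with U ~ U(0,1) and solving for s gives the value below."""
          u = rng()
          base = ((age - gamma) / eta) ** beta - math.log(u)
          return gamma - age + eta * base ** (1.0 / beta)

      # Remaining life of an item with beta = 2, eta = 1000 h after 300 h of service.
      print(conditional_weibull_sample(age=300.0, beta=2.0, eta=1000.0))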

  14. Spectroscopic and computational studies of ionic clusters as models of solvation and atmospheric reactions

    Science.gov (United States)

    Kuwata, Keith T.

    Ionic clusters are useful as model systems for the study of fundamental processes in solution and in the atmosphere. Their structure and reactivity can be studied in detail using vibrational predissociation spectroscopy, in conjunction with high level ab initio calculations. This thesis presents the applications of infrared spectroscopy and computation to a variety of gas-phase cluster systems. A crucial component of the process of stratospheric ozone depletion is the action of polar stratospheric clouds (PSCs) to convert the reservoir species HCl and chlorine nitrate (ClONO2) to photochemically labile compounds. Quantum chemistry was used to explore one possible mechanism by which this activation is effected: Cl- + ClONO2 → Cl2 + NO3- (1). Correlated ab initio calculations predicted that the direct reaction of chloride ion with ClONO2 is facile, which was confirmed in an experimental kinetics study. In the reaction a weakly bound intermediate Cl2-NO3- is formed, with ~70% of the charge localized on the nitrate moiety. This enables the Cl2-NO3- cluster to be well solvated even in bulk solution, allowing (1) to be facile on PSCs. Quantum chemistry was also applied to the hydration of nitrosonium ion (NO+), an important process in the ionosphere. The calculations, in conjunction with an infrared spectroscopy experiment, revealed the structure of the gas-phase clusters NO+(H2O)n. The large degree of covalent interaction between NO+ and the lone pairs of the H2O ligands is contrasted with the weak electrostatic bonding between iodide ion and H2O. Finally, the competition between ion solvation and solvent self-association is explored for the gas-phase clusters Cl-(H2O)n and Cl-(NH3)n. For the case of water, vibrational predissociation spectroscopy reveals less hydrogen bonding among H2O ligands than predicted by ab initio calculations. Nevertheless, for n ≥ 5, cluster structure is dominated by water-water interactions, with Cl- only partially solvated by the

  15. Solvation thermodynamics

    CERN Document Server

    Ben-Naim, Arieh

    1987-01-01

    This book deals with a subject that has been studied since the beginning of physical chemistry. Despite the thousands of articles and scores of books devoted to solvation thermodynamics, I feel that some fundamental and well-established concepts underlying the traditional approach to this subject are not satisfactory and need revision. The main reason for this need is that solvation thermodynamics has traditionally been treated in the context of classical (macroscopic) thermodynamics alone. However, solvation is inherently a molecular process, dependent upon local rather than macroscopic properties of the system. Therefore, the starting point should be based on statistical mechanical methods. For many years it has been believed that certain thermodynamic quantities, such as the standard free energy (or enthalpy or entropy) of solution, may be used as measures of the corresponding functions of solvation of a given solute in a given solvent. I first challenged this notion in a paper published in 1978 b...

  16. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times for such actions requires suitable models for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...

  17. Continuum model of non-equilibrium solvation and solvent effect on ultra-fast processes

    International Nuclear Information System (INIS)

    Li Xiangyuan; Fu Kexiang; Zhu Quan

    2006-01-01

    In the past 50 years, non-equilibrium solvation theory for ultra-fast processes such as electron transfer and light absorption/emission has attracted particular interest. A great deal of research effort has been made in this area, and various models, which give reasonable qualitative descriptions of quantities such as the solvent reorganization energy in electron transfer and the spectral shift in solution, were developed within the framework of continuous medium theory. In a series of publications by the authors, we clarified that the expression for the non-equilibrium electrostatic free energy, which occupies the central position in non-equilibrium solvation and serves as the basis of the various models, was incorrectly formulated. In this work, the authors argue that reversible charging work integration was inappropriately applied in the past to an irreversible path linking the equilibrium and the non-equilibrium states. Because the step from the equilibrium state to the non-equilibrium state is in fact thermodynamically irreversible, the conventional expression for the non-equilibrium free energy that was deduced in different ways is unreasonable. Here the authors derive the non-equilibrium free energy in a quite different form according to the Jackson integral formula. Such a difference casts doubt on existing models, including the famous Marcus two-sphere model for the solvent reorganization energy of electron transfer and the Lippert-Mataga equation for the spectral shift. By introducing the concept of 'spring energy' arising from medium polarizations, the energy constitution of the non-equilibrium state is highlighted. For a solute-solvent system, the authors separate the total electrostatic energy into different components: the self-energies of the solute charge and the polarized charge, the interaction energy between them, and the 'spring energy' of the solvent polarization. With detailed reasoning and derivation, our formula for the non-equilibrium free energy can be reached in different ways. Based on the
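    For reference, the Marcus two-sphere expression that the abstract calls into question estimates the solvent reorganization energy from two cavity radii, the donor-acceptor distance, and the Pekar factor (1/eps_op - 1/eps_s). The sketch below is a minimal implementation of that textbook formula; the radii, distance and permittivities in the example are illustrative assumptions.

      import scipy.constants as const

      def marcus_reorganization_energy(a1_nm, a2_nm, d_nm, eps_op, eps_s, dq=1.0):
          """Classical Marcus two-sphere solvent reorganization energy in kJ/mol.

          a1_nm, a2_nm : donor and acceptor cavity radii (nm)
          d_nm         : donor-acceptor centre-to-centre distance (nm)
          eps_op, eps_s: optical and static relative permittivities of the solvent
          dq           : amount of charge transferred, in units of e
          """
          geometry = (1.0 / (2 * a1_nm) + 1.0 / (2 * a2_nm) - 1.0 / d_nm) / 1e-9  # 1/m
          pekar = 1.0 / eps_op - 1.0 / eps_s
          lam_joule = (dq * const.e) ** 2 / (4 * const.pi * const.epsilon_0) * geometry * pekar
          return lam_joule * const.Avogadro / 1000.0

      # Water (eps_op ~ 1.78, eps_s ~ 78.4), 0.3 nm radii, 0.7 nm separation (assumed values).
      print(marcus_reorganization_energy(0.3, 0.3, 0.7, 1.78, 78.4))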

  18. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  19. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  20. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis
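    For orientation, the snippet below shows how a reliability function follows from a generic NHPP description of the failure process: the probability of a failure-free interval (t, t+x] is exp(-(m(t+x) - m(t))), where m(t) is the mean-value function. The exponential mean-value function used here (Goel-Okumoto type) and the parameter values are illustrative assumptions and are not the MERF/EARF functions of the paper.

      import math

      def mean_failures(t, a, b):
          """Illustrative NHPP mean-value function m(t) = a * (1 - exp(-b * t)):
          a = expected total number of faults, b = fault detection rate."""
          return a * (1.0 - math.exp(-b * t))

      def nhpp_reliability(t, x, a, b):
          """Probability of no failures in (t, t + x] for an NHPP:
          R(x | t) = exp(-(m(t + x) - m(t)))."""
          return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))

      # After 100 h of testing, the chance of a failure-free further 10 h
      # (a = 50 faults, b = 0.02 per hour are made-up parameter values).
      print(nhpp_reliability(t=100.0, x=10.0, a=50.0, b=0.02))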

  1. Solvates of silico-12-molybdic acid with alcohols

    International Nuclear Information System (INIS)

    Punchuk, I.N.; Chuvaev, V.F.

    1984-01-01

    With the aim of investigating the interaction of solid heteropolyacids with organic compounds, solvates were prepared as the products of adding gaseous methanol, ethanol and isopropanol to silico-12-molybdic acid. The compounds were studied by IR and PMR spectroscopy. Possible models for the solvate structure are considered, as well as their connection with solvate properties and thermal decomposition

  2. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program

    DEFF Research Database (Denmark)

    Svendsen, Casper Steinmann; Blædel, Kristoffer; Christensen, Anders S

    2013-01-01

    An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS following the approach by Chudinov et al (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin, a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  3. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies.

    Science.gov (United States)

    Bardhan, Jaydeep P; Jungwirth, Pavel; Makowski, Lee

    2012-09-28

    Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular "linear response" model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution).
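    The combination suggested in the last sentence can be written as a very simple model of the reaction potential at a solute site: a charge-independent offset plus a slope that differs for positive and negative charge. The sketch below is a minimal illustration; the offset echoes the ~44 kJ/mol/e static potential quoted above, while the two slopes are made-up values, not fitted results from the paper.

      def reaction_potential(q, phi_static, slope_pos, slope_neg):
          """Affine, piecewise-linear model of the solvent reaction potential at a
          solute site carrying charge q (in units of e).

          phi_static : charge-independent offset analogous to the static,
                       liquid-vapor-like potential (kJ/mol/e)
          slope_pos  : response slope applied when q >= 0 (kJ/mol/e^2)
          slope_neg  : response slope applied when q < 0 (kJ/mol/e^2)
          """
          slope = slope_pos if q >= 0 else slope_neg
          return phi_static + slope * q

      # Offset from the abstract (~44 kJ/mol/e); slopes are illustrative only.
      for q in (-1.0, 0.0, +1.0):
          print(q, reaction_potential(q, phi_static=44.0, slope_pos=-600.0, slope_neg=-700.0))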

  5. Interface of the polarizable continuum model of solvation with semi-empirical methods in the GAMESS program.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

    Full Text Available An interface between semi-empirical methods and the polarized continuum model (PCM) of solvation has been successfully implemented into GAMESS following the approach by Chudinov et al. (Chem. Phys. 1992, 160, 41). The interface includes energy gradients and is parallelized. For large molecules such as ubiquitin a reasonable speedup (up to a factor of six) is observed for up to 16 cores. The SCF convergence is greatly improved by PCM for proteins compared to the gas phase.

  6. Generalized linear solvation energy model applied to solute partition coefficients in ionic liquid-supercritical carbon dioxide systems

    Czech Academy of Sciences Publication Activity Database

    Planeta, Josef; Karásek, Pavel; Hohnová, Barbora; Šťavíková, Lenka; Roth, Michal

    2012-01-01

    Roč. 1250, SI (2012), s. 54-62 ISSN 0021-9673 R&D Projects: GA ČR(CZ) GAP206/11/0138; GA ČR(CZ) GAP106/12/0522; GA ČR(CZ) GPP503/11/P523 Institutional research plan: CEZ:AV0Z40310501 Keywords : ionic liquid * supercritical carbon dioxide * solvation energy model Subject RIV: BJ - Thermodynamics Impact factor: 4.612, year: 2012

  7. Ionic Solution: What Goes Right and Wrong with Continuum Solvation Modeling.

    Science.gov (United States)

    Wang, Changhao; Ren, Pengyu; Luo, Ray

    2017-12-14

    Solvent-mediated electrostatic interactions are well recognized to be important in the structure and function of molecular systems. Ionic interactions are an important component of electrostatic interactions, especially in highly charged molecules such as nucleic acids. Here, we focus on the quality of the widely used Poisson-Boltzmann surface area (PBSA) continuum models in modeling ionic interactions by comparing with both explicit-solvent simulations and experiment. In this work, the molality-dependent chemical potentials for sodium chloride (NaCl) electrolyte were first simulated in the SPC/E explicit solvent. Our high-quality simulation agrees well with both the previous study and the experiment. Taking the free-energy simulations in SPC/E as the benchmark, we used the same sets of snapshots collected in the SPC/E solvent model for PBSA free-energy calculations, in the hope of achieving maximum consistency between the two solvent models. Our comparative analysis shows that the molality-dependent chemical potentials of NaCl were reproduced well with both linear PB and nonlinear PB methods, although nonlinear PB agrees better with SPC/E and the experiment. Our free-energy simulations also show that the presence of salt increases the hydrophobic effect in a nonlinear fashion, in qualitative agreement with previous theoretical studies of Onsager and Samaras. However, the lack of molality dependence in the nonelectrostatic continuum models dramatically reduces the overall quality of PBSA methods in modeling salt-dependent energetics. These analyses point to further improvements needed for more robust modeling of solvent-mediated interactions by continuum solvation frameworks.
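
    As a rough point of reference for the molality dependence discussed above, the sketch below evaluates the textbook Debye-Hückel limiting law for the NaCl mean activity coefficient and the corresponding excess chemical potential; it is valid only at low molality and is not the simulation or PBSA protocol used in this work.

```python
import numpy as np

R = 8.314462618   # J/(mol*K)
T = 298.15        # K
A_DH = 1.172      # Debye-Hueckel limiting-law constant for water at 25 C (natural log, (kg/mol)^0.5)

def mean_activity_coefficient(m):
    """gamma+- from the Debye-Hueckel limiting law for a 1:1 salt at molality m (low m only)."""
    ionic_strength = m   # for a 1:1 electrolyte, I equals the molality
    return np.exp(-A_DH * np.sqrt(ionic_strength))

def nacl_excess_chemical_potential(m, m_ref=1.0):
    """Molality-dependent part of mu(NaCl) = mu0 + 2RT*ln(gamma+- * m/m_ref), in kJ/mol."""
    return 2.0 * R * T * np.log(mean_activity_coefficient(m) * m / m_ref) / 1000.0

for m in (0.01, 0.1, 0.5):
    print(m, round(nacl_excess_chemical_potential(m), 2))
```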

  8. Comparison of the Marcus and Pekar partitions in the context of non-equilibrium, polarizable-continuum solvation models

    International Nuclear Information System (INIS)

    You, Zhi-Qiang; Herbert, John M.; Mewes, Jan-Michael; Dreuw, Andreas

    2015-01-01

    The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into “fast” versus “slow” polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents.

  9. Comparison of the Marcus and Pekar partitions in the context of non-equilibrium, polarizable-continuum solvation models

    Energy Technology Data Exchange (ETDEWEB)

    You, Zhi-Qiang; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu [Department of Chemistry and Biochemistry, The Ohio State University, Columbus, Ohio 43210 (United States); Mewes, Jan-Michael; Dreuw, Andreas [Interdisciplinary Center for Scientific Computing, Ruprechts-Karls University, Im Neuenheimer Feld 368, 69120 Heidelberg (Germany)

    2015-11-28

    The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into “fast” versus “slow” polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents.
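
    For orientation only, the textbook fast/slow separation that underlies both partitions can be summarized as below; this is the generic dielectric bookkeeping, not the PCM matrix equations derived in the paper.

```latex
% Fast (electronic) and slow (nuclear/orientational) polarization are conventionally
% apportioned via the optical and static dielectric constants,
\chi_{\mathrm{fast}} \;\propto\; \varepsilon_{\infty}-1,
\qquad
\chi_{\mathrm{slow}} \;\propto\; \varepsilon_{0}-\varepsilon_{\infty},
% and the non-equilibrium response is governed by the familiar Pekar factor
\left(\frac{1}{\varepsilon_{\infty}}-\frac{1}{\varepsilon_{0}}\right).
```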

  10. The cavity electromagnetic field within the polarizable continuum model of solvation

    Energy Technology Data Exchange (ETDEWEB)

    Pipolo, Silvio, E-mail: silvio.pipolo@nano.cnr.it [Center S3, CNR Institute of Nanoscience, Modena (Italy); Department of Physics, University of Modena and Reggio Emilia, Modena (Italy); Corni, Stefano, E-mail: stefano.corni@nano.cnr.it [Center S3, CNR Institute of Nanoscience, Modena (Italy); Cammi, Roberto, E-mail: roberto.cammi@unipr.it [Department of Chemistry, Università degli studi di Parma, Parma (Italy)

    2014-04-28

    Cavity field effects can be defined as the consequences of the solvent polarization induced by the probing electromagnetic field upon spectroscopies of molecules in solution, and enter into the definitions of solute response properties. The polarizable continuum model of solvation (PCM) has been extended in the past years to address the cavity-field issue through the definition of an effective dipole moment that couples to the external electromagnetic field. We present here a rigorous derivation of such a cavity-field treatment within the PCM, starting from the general radiation-matter Hamiltonian within inhomogeneous dielectrics and recasting the interaction term to a dipolar form within the long-wavelength approximation. To this aim we generalize the Göppert-Mayer and Power-Zienau-Woolley gauge transformations, usually applied in vacuo, to the case of a cavity vector potential. Our derivation also allows extending the cavity-field correction in the long-wavelength limit to the velocity gauge through the definition of an effective linear momentum operator. Furthermore, this work sets the basis for the general PCM treatment of the electromagnetic cavity field, capable of describing the radiation-matter interaction in dielectric media beyond the long-wavelength limit, also providing a tool to investigate spectroscopic properties of more complex systems such as molecules close to large nanoparticles.

  11. Examination of hydrogen-bonding interactions between dissolved solutes and alkylbenzene solvents based on Abraham model correlations derived from measured enthalpies of solvation

    Energy Technology Data Exchange (ETDEWEB)

    Varfolomeev, Mikhail A.; Rakipov, Ilnaz T. [Chemical Institute, Kazan Federal University, Kremlevskaya 18, Kazan 420008 (Russian Federation); Acree, William E., E-mail: acree@unt.edu [Department of Chemistry, 1155 Union Circle # 305070, University of North Texas, Denton, TX 76203-5017 (United States); Brumfield, Michela [Department of Chemistry, 1155 Union Circle # 305070, University of North Texas, Denton, TX 76203-5017 (United States); Abraham, Michael H. [Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom)

    2014-10-20

    Highlights: • Enthalpies of solution measured for 48 solutes dissolved in mesitylene. • Enthalpies of solution measured for 81 solutes dissolved in p-xylene. • Abraham model correlations derived for enthalpies of solvation of solutes in mesitylene. • Abraham model correlations derived for enthalpies of solvation of solutes in p-xylene. • Hydrogen-bonding enthalpies reported for interactions of aromatic hydrocarbons with hydrogen-bond acidic solutes. - Abstract: Enthalpies of solution at infinite dilution of 48 organic solutes in mesitylene and 81 organic solutes in p-xylene were measured using an isothermal solution calorimeter. Enthalpies of solvation for 92 organic vapors and gaseous solutes in mesitylene and for 130 gaseous compounds in p-xylene were determined from the experimental and literature data. Abraham model correlations are determined from the experimental enthalpy of solvation data. The derived correlations describe the experimental gas-to-mesitylene and gas-to-p-xylene solvation enthalpies to within average standard deviations of 1.87 kJ mol{sup −1} and 2.08 kJ mol{sup −1}, respectively. Enthalpies of X-H⋯π (X = O, N, and C) hydrogen bond formation of proton donor solutes (alcohols, amines, chlorinated hydrocarbons, etc.) with mesitylene and p-xylene were calculated based on the Abraham solvation equation. The obtained values are in good agreement with the results determined using conventional methods.
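
    For readers unfamiliar with the correlation form, the general Abraham solvation-parameter equation for gas-to-solvent enthalpies of solvation has the structure shown below (solvent-specific coefficients, solute-specific descriptors); the particular coefficient values for mesitylene and p-xylene are reported in the paper and are not reproduced here.

```latex
% Abraham model correlation for the enthalpy of solvation of a gaseous solute:
%   E: excess molar refraction, S: dipolarity/polarizability,
%   A: hydrogen-bond acidity, B: hydrogen-bond basicity,
%   L: logarithm of the gas-to-hexadecane partition coefficient.
\Delta H_{\mathrm{solv}} \;=\; c + e\,E + s\,S + a\,A + b\,B + l\,L
```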

  12. 17O NMR Studies of the Solvation State of cis/trans Isomers of Amides and Model Protected Peptides

    Science.gov (United States)

    Gerothanassis, Ioannis P.; Vakka, Constantina; Troganis, Anastasios

    1996-06-01

    17O shielding constants have been utilized to investigate solvation differences of the cis/trans isomers of N-methylformamide (NMF), N-ethylformamide (NEF), and tert-butylformamide (TBF) in a variety of solvents with particular emphasis on aqueous solution. Comparisons are also made with protected peptides of the formulas CH3CO-YOH, CH3CO-Y-NHR (Y = Pro, Sar), and CH3CO-Y-Z-NHR (Y = Pro; Z = D-Ala) selectively enriched in 17O at the acetyl oxygen atom. Hydration at the amide oxygen induces large and specific modifications of the 17O shielding constants, which are practically the same for the cis and trans isomers of NMF, NEF, and the protected peptides. For tert-butylformamide, the strong deshielding of the trans isomer compared to that of the cis isomer may be attributed to an out-of-plane (torsion-angle) deformation of the amide bond and/or a significant reduction of solvation of the trans isomer due to steric inhibition of the bulky tert-butyl group. Good linear correlation between δ(17O) of amides and δ(17O) of acetone was found for different solvents which have varying dielectric constants and solvation abilities. Sum-over-states calculations, within the solvaton model, underestimate effects of the dielectric constant of the medium on 17O shielding, while finite-perturbation-theory calculations give good agreement with the experiment.

  13. 17O NMR Studies of the Solvation State of cis/trans Isomers of Amides and Model Protected Peptides

    Science.gov (United States)

    Gerothanassis; Vakka; Troganis

    1996-06-01

    17O shielding constants have been utilized to investigate solvation differences of the cis/trans isomers of N-methylformamide (NMF), N-ethylformamide (NEF), and tert-butylformamide (TBF) in a variety of solvents with particular emphasis on aqueous solution. Comparisons are also made with protected peptides of the formulas CH3CO-YOH, CH3CO-Y-NHR (Y = Pro, Sar), and CH3CO-Y-Z-NHR (Y = Pro; Z = D-Ala) selectively enriched in 17O at the acetyl oxygen atom. Hydration at the amide oxygen induces large and specific modifications of the 17O shielding constants, which are practically the same for the cis and trans isomers of NMF, NEF, and the protected peptides. For tert-butylformamide, the strong deshielding of the trans isomer compared to that of the cis isomer may be attributed to an out-of-plane (torsion-angle) deformation of the amide bond and/or a significant reduction of solvation of the trans isomer due to steric inhibition of the bulky tert-butyl group. Good linear correlation between δ(17O) of amides and δ(17O) of acetone was found for different solvents which have varying dielectric constants and solvation abilities. Sum-over-states calculations, within the solvaton model, underestimate effects of the dielectric constant of the medium on 17O shielding, while finite-perturbation-theory calculations give good agreement with the experiment.

  14. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    Science.gov (United States)

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error of 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error of 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  15. Quantum chemical approach for condensed-phase thermochemistry (V): Development of rigid-body type harmonic solvation model

    Science.gov (United States)

    Tarumi, Moto; Nakai, Hiromi

    2018-05-01

    This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.

  16. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile

    Science.gov (United States)

    Zanith, Caroline C.; Pliego, Josefredo R.

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data of neutral solutes in methanol was used in the parametrization, while only a few solvation free energy values of solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from the literature, we have generated solvation free energy, enthalpy and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide and 19 solutes in acetonitrile. Both the SMD and SM8 models present a good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol-1 in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol-1, respectively. Our results point out that the SMx family of models needs to be improved for the dimethyl sulfoxide solvent.
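
    The mean unsigned error quoted above is simply the average absolute deviation between predicted and reference solvation free energies; a minimal sketch (with made-up numbers) follows.

```python
import numpy as np

def mean_unsigned_error(predicted, reference):
    """Mean unsigned error (same units as the inputs, here kcal/mol)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(predicted - reference)))

# Hypothetical continuum-model predictions vs. experimental values (kcal/mol), for illustration only.
dg_pred = [-5.1, -3.8, -7.2, -1.9]
dg_exp = [-4.6, -4.1, -6.5, -2.3]
print(round(mean_unsigned_error(dg_pred, dg_exp), 2))
```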

  17. Atomistic characterization of the active-site solvation dynamics of a model photocatalyst

    DEFF Research Database (Denmark)

    Brandt van Driel, Tim; Kjær, Kasper Skov; Hartsock, Robert W.

    2016-01-01

    The interactions between the reactive excited state of molecular photocatalysts and surrounding solvent dictate reaction mechanisms and pathways, but are not readily accessible to conventional optical spectroscopic techniques. Here we report an investigation of the structural and solvation dynamics...... of the iridium atoms by the acetonitrile solvent and demonstrate the viability of using diffuse X-ray scattering at free-electron laser sources for studying the dynamics of photocatalysis.

  18. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in place...

  19. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  20. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  1. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described along with the methodology for calculating the reliability indices.

  2. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

    Full Text Available The failure-free operation of an object is very important for service. This leads to interest in determining the object's reliability and failure intensity. The reliability of an element is defined by the theory of probability. The element durability T is a continuous random variable with probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to the ageing of the element. We had at our disposal data on belt conveyor failures recorded over a period of 90 months. The given set behaves according to the normal distribution. By using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
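
    A minimal sketch of the failure-intensity (hazard) function for a normally distributed time to failure is given below; the mean and standard deviation are placeholders, not the fitted values from the conveyor data.

```python
from scipy import stats

# Hypothetical normal fit to time-to-failure data (months); illustrative values only.
MU, SIGMA = 45.0, 15.0

def failure_intensity(t):
    """Hazard function lambda(t) = f(t) / (1 - F(t)) for the fitted normal distribution."""
    dist = stats.norm(MU, SIGMA)
    return dist.pdf(t) / dist.sf(t)   # sf(t) is the survival function 1 - F(t)

for t in (30.0, 60.0, 90.0):
    print(t, round(failure_intensity(t), 4))   # grows with t, consistent with ageing
```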

  3. Probing the role of interfacial waters in protein-DNA recognition using a hybrid implicit/explicit solvation model

    Science.gov (United States)

    Li, Shen; Bradley, Philip

    2013-01-01

    When proteins bind to their DNA target sites, ordered water molecules are often present at the protein-DNA interface bridging protein and DNA through hydrogen bonds. What is the role of these ordered interfacial waters? Are they important determinants of the specificity of DNA sequence recognition, or do they act in binding in a primarily non-specific manner, by improving packing of the interface, shielding unfavorable electrostatic interactions, and solvating unsatisfied polar groups that are inaccessible to bulk solvent? When modeling details of structure and binding preferences, can fully implicit solvent models be fruitfully applied to protein-DNA interfaces, or must the individualistic properties of these interfacial waters be accounted for? To address these questions, we have developed a hybrid implicit/explicit solvation model that specifically accounts for the locations and orientations of small numbers of DNA-bound water molecules while treating the majority of the solvent implicitly. Comparing the performance of this model to its fully implicit counterpart, we find that explicit treatment of interfacial waters results in a modest but significant improvement in protein sidechain placement and DNA sequence recovery. Base-by-base comparison of the performance of the two models highlights DNA sequence positions whose recognition may be dependent on interfacial water. Our study offers large-scale statistical evidence for the role of ordered water for protein DNA recognition, together with detailed examination of several well-characterized systems. In addition, our approach provides a template for modeling explicit water molecules at interfaces that should be extensible to other systems. PMID:23444044

  4. A simple model for solvation in mixed solvents. Applications to the stabilization and destabilization of macromolecular structures.

    Science.gov (United States)

    Schellman, J A

    1990-08-31

    The properties of a simple model for solvation in mixed solvents are explored in this paper. The model is based on the supposition that solvent replacement is a simple one-for-one substitution reaction at macromolecular sites which are independent of one another. This leads to a new form for the binding polynomial in which all terms are associated with ligand interchange rather than ligand addition. The principal solvent acts as one of the ligands. Thermodynamic analysis then shows that thermodynamic binding (i.e., selective interaction) depends on the properties of K' - 1, whereas stoichiometric binding (site occupation) depends on K'. K' is a 'practical' interchange equilibrium constant given by (f3/f1)K, where K is the true equilibrium constant for the interchange of components 3 and 1 on the site and f3 and f1 denote their respective activity coefficients on the mole fraction scale. Values of K' less than unity lead to negative selective interaction. It is selective interaction and not occupation number which determines the thermodynamic effects of solvation. When K' is greater than 100 on the mole fraction scale or greater than 2 on the molality scale (in water), the differences between stoichiometric binding and selective interaction become less than 1%. The theory of this paper is therefore necessary only for very weak binding constants. When K' - 1 is small, large concentrations of the added solvent component are required to produce a thermodynamic effect. Under these circumstances the isotherms for the selective interaction and for the excess (or transfer) free energy are strongly dependent on the behavior of the activity coefficients of both solvent components. Two classes of behavior are described depending on whether the components display positive or negative deviations from Raoult's law. Examples which are discussed are aqueous solutions of urea and guanidinium chloride for positive deviations and of sucrose and glucose for negative deviations.
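
    One way to see the distinction between K' (occupation) and K' - 1 (selective interaction) is to write the single-site exchange isotherm explicitly; the expressions below follow directly from the one-for-one substitution assumption and are offered as an illustrative sketch, not as a reproduction of the paper's full derivation.

```latex
% Occupancy of an independent site by component 3, with solvent mole fractions x_1 + x_3 = 1:
\theta_3 \;=\; \frac{K' x_3}{x_1 + K' x_3} \;=\; \frac{K' x_3}{1 + (K'-1)\,x_3},
% whereas the selective (excess) interaction compares occupancy with the bulk composition:
\Gamma_3 \;=\; \theta_3 - x_3 \;=\; \frac{(K'-1)\,x_3\,(1-x_3)}{1 + (K'-1)\,x_3},
% which vanishes when K' = 1 even though the site remains occupied part of the time.
```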

  5. A molecular Debye-Hückel theory of solvation in polar fluids: An extension of the Born model

    Science.gov (United States)

    Xiao, Tiejun; Song, Xueyu

    2017-12-01

    A dielectric response theory of solvation beyond the conventional Born model for polar fluids is presented. The dielectric response of a polar fluid is described by a Born response mode and a linear combination of Debye-Hückel-like response modes that capture the nonlocal response of polar fluids. The Born mode is characterized by a bulk dielectric constant, while a Debye-Hückel mode is characterized by its corresponding Debye screening length. Both the bulk dielectric constant and the Debye screening lengths are determined from the bulk dielectric function of the polar fluid. The linear combination coefficients of the response modes are evaluated in a self-consistent way and can be used to evaluate the electrostatic contribution to the thermodynamic properties of a polar fluid. Our theory is applied to a dipolar hard sphere fluid as well as interaction site models of polar fluids such as water, where the electrostatic contribution to their thermodynamic properties can be obtained accurately.
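
    The baseline being extended is the classical Born expression; in the molecular Debye-Hückel picture described above, screened (Debye-Hückel-like) response modes are added on top of it. The standard Born formula, in Gaussian units, is:

```latex
% Born solvation free energy of an ion of charge q and cavity radius a
% in a solvent of static dielectric constant epsilon (Gaussian units):
\Delta G_{\mathrm{Born}} \;=\; -\,\frac{q^{2}}{2a}\left(1-\frac{1}{\varepsilon}\right)
```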

  6. Computing the Absorption and Emission Spectra of 5-Methylcytidine in Different Solvents: A Test-Case for Different Solvation Models.

    Science.gov (United States)

    Martínez-Fernández, L; Pepino, A J; Segarra-Martí, J; Banyasz, A; Garavelli, M; Improta, R

    2016-09-13

    The optical spectra of 5-methylcytidine in three different solvents (tetrahydrofuran, acetonitrile, and water) are measured, showing that both the absorption and the emission maximum in water are significantly blue-shifted (0.08 eV). The absorption spectra are simulated based on CAM-B3LYP/TD-DFT calculations but including solvent effects with three different approaches: (i) a hybrid implicit/explicit full quantum mechanical approach, (ii) a mixed QM/MM static approach, and (iii) a QM/MM method exploiting the structures issuing from classical molecular dynamics simulations. Ab initio molecular dynamics simulations based on the CAM-B3LYP functional have also been performed. The adopted approaches all reproduce the main features of the experimental spectra, giving insights into the chemical-physical effects responsible for the solvent shifts in the spectra of 5-methylcytidine and providing the basis for discussing advantages and limitations of the adopted solvation models.

  7. Lieb-Liniger-like model of quantum solvation in CO-{sup 4}He{sub N} clusters

    Energy Technology Data Exchange (ETDEWEB)

    Farrelly, D. [Departamento de Matemáticas y Computación, Universidad de La Rioja, 26006 Logroño (Spain); Department of Chemistry and Biochemistry, Utah State University, Logan, Utah 84322-0300 (United States); Iñarrea, M.; Salas, J. P. [Área de Física Aplicada, Universidad de La Rioja, 26006 Logroño (Spain); Lanchares, V. [Department of Chemistry and Biochemistry, Utah State University, Logan, Utah 84322-0300 (United States)

    2016-05-28

    Small {sup 4}He clusters doped with various molecules allow for the study of “quantum solvation” as a function of cluster size. A peculiarity of quantum solvation is that, as the number of {sup 4}He atoms is increased from N = 1, the solvent appears to decouple from the molecule which, in turn, appears to undergo free rotation. This is generally taken to signify the onset of “microscopic superfluidity.” Currently, little is known about the quantum mechanics of the decoupling mechanism, mainly because the system is a quantum (N + 1)-body problem in three dimensions which makes computations difficult. Here, a one-dimensional model is studied in which the {sup 4}He atoms are confined to revolve on a ring and encircle a rotating CO molecule. The Lanczos algorithm is used to investigate the eigenvalue spectrum as the number of {sup 4}He atoms is varied. Substantial solvent decoupling is observed for as few as N = 5 {sup 4}He atoms. Examination of the Hamiltonian matrix, which has an almost block diagonal structure, reveals increasingly weak inter-block (solvent-molecule) coupling as the number of {sup 4}He atoms is increased. In the absence of a dopant molecule the system is similar to a Lieb-Liniger (LL) gas and we find a relatively rapid transition to the LL limit as N is increased. In essence, the molecule initially—for very small N—provides a central, if relatively weak, attraction to organize the cluster; as more {sup 4}He atoms are added, the repulsive interactions between the identical bosons start to dominate as the solvation ring (shell) becomes more crowded which causes the molecule to start to decouple. For low N, the molecule pins the atoms in place relative to itself; as N increases the atom-atom repulsion starts to dominate the Hamiltonian and the molecule decouples. We conclude that, while the notion of superfluidity is a useful and correct description of the decoupling process, a molecular viewpoint provides complementary insights into the

  8. Improving accuracy of electrochemical capacitance and solvation energetics in first-principles calculations

    Science.gov (United States)

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.

    2018-04-01

    Reliable first-principles calculations of electrochemical processes require accurate prediction of the interfacial capacitance, a challenge for current computationally efficient continuum solvation methodologies. We develop a model for the double layer of a metallic electrode that reproduces the features of the experimental capacitance of Ag(100) in a non-adsorbing, aqueous electrolyte, including a broad hump in the capacitance near the potential of zero charge and a dip in the capacitance under conditions of low ionic strength. Using this model, we identify the necessary characteristics of a solvation model suitable for first-principles electrochemistry of metal surfaces in non-adsorbing, aqueous electrolytes: dielectric and ionic nonlinearity, and a dielectric-only region at the interface. The dielectric nonlinearity, caused by the saturation of dipole rotational response in water, creates the capacitance hump, while ionic nonlinearity, caused by the compactness of the diffuse layer, generates the capacitance dip seen at low ionic strength. We show that none of the previously developed solvation models simultaneously meet all these criteria. We design the nonlinear electrochemical soft-sphere solvation model which both captures the capacitance features observed experimentally and serves as a general-purpose continuum solvation model.
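
    The hump-and-dip phenomenology can be anchored against the classical Gouy-Chapman-Stern picture, sketched below for a 1:1 aqueous electrolyte; this textbook model is background only and is not the nonlinear soft-sphere solvation model developed in the paper. The Helmholtz-layer capacitance used here is an illustrative placeholder.

```python
import numpy as np

EPS0 = 8.854e-12                 # vacuum permittivity, F/m
KT = 1.380649e-23 * 298.15       # thermal energy at 298.15 K, J
E = 1.602177e-19                 # elementary charge, C
NA = 6.02214076e23               # Avogadro constant, 1/mol

def debye_length(c_molar, eps_r=78.4):
    """Debye screening length (m) for a 1:1 aqueous electrolyte at molar concentration c."""
    n = c_molar * 1000.0 * NA    # number density of each ion species, 1/m^3
    return np.sqrt(eps_r * EPS0 * KT / (2.0 * n * E**2))

def gouy_chapman_capacitance(phi_d, c_molar, eps_r=78.4):
    """Diffuse-layer capacitance per area (F/m^2); minimal at the potential of zero charge."""
    return (eps_r * EPS0 / debye_length(c_molar, eps_r)) * np.cosh(E * phi_d / (2.0 * KT))

def stern_capacitance(phi_d, c_molar, c_helmholtz=0.20):
    """Series combination with a compact, dielectric-only layer (c_helmholtz in F/m^2)."""
    c_gc = gouy_chapman_capacitance(phi_d, c_molar)
    return 1.0 / (1.0 / c_helmholtz + 1.0 / c_gc)

# At low ionic strength the diffuse term collapses near phi_d = 0, producing the dip.
for c in (0.001, 0.1):
    print(c, [round(stern_capacitance(p, c) * 100.0, 1) for p in (-0.1, 0.0, 0.1)])  # uF/cm^2
```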

  9. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane, and the basic parameters of the double beam bridge crane are defined. According to the structure and system division of the double beam bridge crane, the reliability architecture of the double beam bridge crane system is proposed, and the reliability mathematical model is constructed.

  10. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter...... and uncertainties are quantified. Further, estimation of annual failure probability for structural components taking into account possible faults in electrical or mechanical systems is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures...

  11. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
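
    A minimal sketch of the Weibull reliability and hazard functions, showing the increasing-failure-rate behavior obtained when the shape parameter exceeds one, is given below; the shape and scale values are hypothetical and not taken from any EBS assessment.

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """Survival (reliability) function R(t) = exp(-(t/eta)^beta)."""
    return np.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta)*(t/eta)^(beta-1); increasing (IFR) whenever beta > 1."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Hypothetical shape/scale parameters chosen only to illustrate an IFR barrier model.
BETA, ETA = 2.5, 10000.0
for t in (1000.0, 5000.0, 10000.0):
    print(t, round(weibull_reliability(t, BETA, ETA), 4), weibull_hazard(t, BETA, ETA))
```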

  12. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, an IFR distribution is used to develop a reliability model for the EBS

  13. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

    The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated...... and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models....

  14. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  15. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    International Nuclear Information System (INIS)

    Sanyal, Tanmoy; Shell, M. Scott

    2016-01-01

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
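
    The essence of a local-density potential is that each site's nonbonded energy depends on a smoothed count of neighbors rather than on pair terms alone; the sketch below illustrates that idea with a generic weighting function and made-up coefficients, and is not the relative-entropy-optimized functional form used in the paper.

```python
import numpy as np

def local_densities(positions, cutoff=1.0):
    """Per-site local density rho_i = sum_j w(r_ij), with a smooth weight that vanishes at the cutoff."""
    positions = np.asarray(positions, dtype=float)
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(r, np.inf)                    # exclude each site's self-contribution
    x = np.clip(r / cutoff, 0.0, 1.0)
    return ((1.0 - x**2) ** 2).sum(axis=1)

def local_density_energy(positions, coeffs=(0.0, -1.0, 0.05)):
    """Total energy U = sum_i f(rho_i), with f a simple polynomial (illustrative coefficients)."""
    rho = local_densities(positions)
    a0, a1, a2 = coeffs
    return float(np.sum(a0 + a1 * rho + a2 * rho**2))

rng = np.random.default_rng(0)
print(local_density_energy(rng.random((20, 3)) * 3.0))
```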

  16. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    Science.gov (United States)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
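
    Since the distribution coefficient is approximated by the partition coefficient of the neutral species, it can be written directly in terms of the two solvation free energies; the sketch below uses hypothetical numbers purely for illustration.

```python
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15             # K

def log10_partition_coefficient(dg_water, dg_cyclohexane):
    """log10 P(cyclohexane/water) from solvation free energies of the neutral species (kcal/mol).

    The water-to-cyclohexane transfer free energy is dG_cyc - dG_wat, so
    log10 P = -(dG_cyc - dG_wat) / (ln(10) * R * T).
    """
    return -(dg_cyclohexane - dg_water) / (math.log(10.0) * R_KCAL * T)

# Hypothetical solvation free energies (kcal/mol); a solute better solvated by water gets log P < 0.
print(round(log10_partition_coefficient(dg_water=-6.0, dg_cyclohexane=-4.5), 2))
```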

  17. Solvation of o-hydroxybenzoic acid in pure and modified supercritical carbon dioxide, according to numerical modeling data

    Science.gov (United States)

    Antipova, M. L.; Gurina, D. L.; Odintsova, E. G.; Petrenko, V. E.

    2015-08-01

    The dissolution of an elementary fragment of the crystal structure (an o-hydroxybenzoic acid (o-HBA) dimer) in both pure supercritical (SC) carbon dioxide and SC carbon dioxide modified by adding methanol (molar fraction 0.035) at T = 318 K and ρ = 0.7 g/cm3 is simulated. Features of the solvation mechanism in each solvent are revealed. The solvation of o-HBA in pure SC CO2 is shown to occur via electron donor-acceptor interactions. In modified SC CO2, o-HBA forms a solvate complex through hydrogen bonds between the carboxyl group and methanol. The hydroxyl group of o-HBA participates in the formation of an intramolecular hydrogen bond, and not in interactions with the solvent. It is concluded that the o-HBA-methanol complex is a stable molecular structure, and its lifetime is one order of magnitude longer than those of other hydrogen bonds in such fluids.

  18. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  19. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to explore a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Afterwards, this paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.

  20. Surface Protonation at the Rutile (110) Interface: Explicit Incorporation of Solvation Structure within the Refined MUSIC Model Framework

    Energy Technology Data Exchange (ETDEWEB)

    Machesky, Michael L. [Illinois State Water Survey, Champaign, IL; Predota, M. [University of South Bohemia, Czech Republic; Wesolowski, David J [ORNL

    2008-01-01

    The detailed solvation structure at the (110) surface of rutile ({alpha}-TiO{sub 2}) in contact with bulk liquid water has been obtained primarily from experimentally verified classical molecular dynamics (CMD) simulations of the ab initio-optimized surface in contact with SPC/E water. The results are used to explicitly quantify H-bonding interactions, which are then used within the refined MUSIC model framework to predict surface oxygen protonation constants. Quantum mechanical molecular dynamics (QMD) simulations in the presence of freely dissociable water molecules produced H-bond distributions around deprotonated surface oxygens very similar to those obtained by CMD with nondissociable SPC/E water, thereby confirming that the less computationally intensive CMD simulations provide accurate H-bond information. Utilizing this H-bond information within the refined MUSIC model, along with manually adjusted Ti-O surface bond lengths that are nonetheless within 0.05 {angstrom} of those obtained from static density functional theory (DFT) calculations and measured in X-ray reflectivity experiments (as well as bulk crystal values), give surface protonation constants that result in a calculated zero net proton charge pH value (pHznpc) at 25 C that agrees quantitatively with the experimentally determined value (5.4 {+-} 0.2) for a specific rutile powder dominated by the (110) crystal face. Moreover, the predicted pH{sub znpc} values agree to within 0.1 pH unit with those measured at all temperatures between 10 and 250 C. A slightly smaller manual adjustment of the DFT-derived Ti-O surface bond lengths was sufficient to bring the predicted pH{sub znpc} value of the rutile (110) surface at 25 C into quantitative agreement with the experimental value (4.8 {+-} 0.3) obtained from a polished and annealed rutile (110) single crystal surface in contact with dilute sodium nitrate solutions using second harmonic generation (SHG) intensity measurements as a function of ionic

  1. Surface Protonation at the Rutile (110) Interface: Explicit Incorporation of Solvation Structure within the Refined MUSIC Model Framework

    International Nuclear Information System (INIS)

    Machesky, Michael L.; Predota, M.; Wesolowski, David J.

    2008-01-01

    The detailed solvation structure at the (110) surface of rutile (α-TiO 2 ) in contact with bulk liquid water has been obtained primarily from experimentally verified classical molecular dynamics (CMD) simulations of the ab initio-optimized surface in contact with SPC/E water. The results are used to explicitly quantify H-bonding interactions, which are then used within the refined MUSIC model framework to predict surface oxygen protonation constants. Quantum mechanical molecular dynamics (QMD) simulations in the presence of freely dissociable water molecules produced H-bond distributions around deprotonated surface oxygens very similar to those obtained by CMD with nondissociable SPC/E water, thereby confirming that the less computationally intensive CMD simulations provide accurate H-bond information. Utilizing this H-bond information within the refined MUSIC model, along with manually adjusted Ti-O surface bond lengths that are nonetheless within 0.05 (angstrom) of those obtained from static density functional theory (DFT) calculations and measured in X-ray reflectivity experiments (as well as bulk crystal values), give surface protonation constants that result in a calculated zero net proton charge pH value (pHznpc) at 25 C that agrees quantitatively with the experimentally determined value (5.4 ± 0.2) for a specific rutile powder dominated by the (110) crystal face. Moreover, the predicted pH znpc values agree to within 0.1 pH unit with those measured at all temperatures between 10 and 250 C. A slightly smaller manual adjustment of the DFT-derived Ti-O surface bond lengths was sufficient to bring the predicted pH znpc value of the rutile (110) surface at 25 C into quantitative agreement with the experimental value (4.8 ± 0.3) obtained from a polished and annealed rutile (110) single crystal surface in contact with dilute sodium nitrate solutions using second harmonic generation (SHG) intensity measurements as a function of ionic strength. Additionally, the H

  2. Modelling the Preferential Solvation of Ferulic Acid in {2-Propanol (1) + Water (2)} Mixtures at 298.15 K

    Directory of Open Access Journals (Sweden)

    Abolghasem Jouyban; Fleming Martínez

    2017-12-01

    Full Text Available Background: Recently, Haq et al. reported the equilibrium solubility in {2-propanol (1) + water (2)} mixtures at several temperatures, with some numerical correlation analysis. Nevertheless, no attempt was made to evaluate the preferential solvation of this compound by the solvents. Methods: Preferential solvation of ferulic acid in the saturated mixtures at 298.15 K was analyzed based on the inverse Kirkwood-Buff integrals as described in the literature. Results: Ferulic acid is preferentially solvated by water in water-rich mixtures (0.00 < x1 < 0.19) but preferentially solvated by 2-propanol in mixtures with composition 0.19 < x1 < 1.00. Conclusion: These results could be interpreted as a consequence of hydrophobic hydration around the non-polar groups of the solute in the former case (0.00 < x1 < 0.19). Moreover, in the latter case (0.19 < x1 < 1.00), the observed trend could be a consequence of the acidic behavior of ferulic acid toward 2-propanol molecules, because this cosolvent is more basic than water, as described by the respective solvatochromic parameters.

  3. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  4. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives can be maintained by high-technology systems. Computer systems are typical examples of such systems. We can enjoy our modern lives by using many computer systems. Much more importantly, we have to maintain such systems without failure, but cannot predict when such systems will fail and how to fix such systems without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance to the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems by using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on the theme above from the different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  5. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  6. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
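
    A schematic illustration of the Bayes-factor idea (model prediction versus a diffuse alternative, with Gaussian observation noise) is sketched below; it is a toy version of the concept, with hypothetical numbers, and not the time-dependent methodology developed in the paper.

```python
import numpy as np
from scipy import stats

def bayes_factor(observations, prediction, sigma_obs, alt_spread, n_samples=20000, seed=1):
    """Schematic Bayes factor B = P(data | model) / P(data | diffuse alternative).

    Under the model, observations scatter about the point prediction with noise sigma_obs;
    the alternative marginalizes the mean over a broad Gaussian prior (Monte Carlo estimate).
    """
    obs = np.asarray(observations, dtype=float)
    like_model = np.prod(stats.norm.pdf(obs, loc=prediction, scale=sigma_obs))
    means = np.random.default_rng(seed).normal(prediction, alt_spread, size=n_samples)
    like_alt = np.mean([np.prod(stats.norm.pdf(obs, loc=m, scale=sigma_obs)) for m in means])
    return like_model / like_alt

# Hypothetical measured failure probabilities vs. a model prediction of 0.02; B > 1 favors the model.
print(bayes_factor([0.021, 0.018, 0.025], prediction=0.02, sigma_obs=0.005, alt_spread=0.02))
```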

  7. Drying process optimization for an API solvate using heat transfer model of an agitated filter dryer.

    Science.gov (United States)

    Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V

    2012-10-01

    Drying an early stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and minimize mass transfer limitations. Designing the right mixing protocol is of utmost importance to achieve efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effects of a gas gap between the vessel wall and solids on the overall heat transfer coefficient, and for the effect of headspace pressure on the mean free path length of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time. Copyright © 2012 Wiley-Liss, Inc.

  8. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without the data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only place checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  9. Solvation in supercritical water

    International Nuclear Information System (INIS)

    Cochran, H.D.; Cummings, P.T.; Karaborni, S.

    1991-01-01

    The aim of this work is to determine the solvation structure in supercritical water compared with that in ambient water and in simple supercritical solvents. Molecular dynamics studies have been undertaken of systems that model ionic sodium and chloride, atomic argon, and molecular methanol in supercritical aqueous solutions using the simple point charge model of Berendsen for water. Because of the strong interactions between water and ions, ionic solutes are strongly attractive in supercritical water, forming large clusters of water molecules around each ion. Methanol is found to be a weakly attractive solute in supercritical water. The cluster of excess water molecules surrounding a dissolved ion or polar molecule in supercritical aqueous solutions is comparable to the solvent clusters surrounding attractive solutes in simple supercritical fluids. Likewise, the deficit of water molecules surrounding a dissolved argon atom in supercritical aqueous solutions is comparable to that surrounding repulsive solutes in simple supercritical fluids. The number of hydrogen bonds per water molecule in supercritical water was found to be about one third the number in ambient water. The number of hydrogen bonds per water molecule surrounding a central particle in supercritical water was only mildly affected by the identity of the central particle (atom, molecule, or ion). These results should be helpful in developing a qualitative understanding of important processes that occur in supercritical water. 29 refs., 6 figs

  10. Communication: modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions.

    Science.gov (United States)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  11. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry

  12. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    Science.gov (United States)

    Bardhan, Jaydeep P.; Knepley, Matthew G.

    2014-01-01

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry. PMID:25296776

  13. Communication: Modeling charge-sign asymmetric solvation free energies with nonlinear boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Bardhan, Jaydeep P. [Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts 02115 (United States); Knepley, Matthew G. [Computation Institute, The University of Chicago, Chicago, Illinois 60637 (United States)

    2014-10-07

    We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.

  14. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    Science.gov (United States)

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.
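
    A minimal sketch of the linear-regression route to eliminating the proton contribution is shown below: experimental pKa values are regressed against computed deprotonation free energies, so the unknown absolute proton solvation term is absorbed into the intercept. The numbers are hypothetical placeholders, not EC-RISM results.

        import numpy as np

        # Hypothetical computed deprotonation free energies (kcal/mol) and experimental
        # pKa values in DMSO for a small training set of acids.
        dG_calc = np.array([300.5, 305.2, 310.8, 318.1, 322.7])
        pKa_exp = np.array([8.3, 11.1, 14.6, 19.9, 23.2])

        # pKa = slope * dG + intercept: the slope ideally equals 1/(RT ln 10) and the
        # intercept absorbs the proton's (unknown) absolute solvation free energy.
        slope, intercept = np.polyfit(dG_calc, pKa_exp, 1)
        print(f"slope = {slope:.3f} mol/kcal (ideal ~ {1 / 1.364:.3f} at 298 K)")
        print(f"predicted pKa at dG = 315 kcal/mol: {slope * 315.0 + intercept:.1f}")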

  15. Biomolecular electrostatics and solvation: a computational perspective.

    Science.gov (United States)

    Ren, Pengyu; Chun, Jaehun; Thomas, Dennis G; Schnieders, Michael J; Marucho, Marcelo; Zhang, Jiajing; Baker, Nathan A

    2012-11-01

    An understanding of molecular interactions is essential for insight into biological systems at the molecular scale. Among the various components of molecular interactions, electrostatics are of special importance because of their long-range nature and their influence on polar or charged molecules, including water, aqueous ions, proteins, nucleic acids, carbohydrates, and membrane lipids. In particular, robust models of electrostatic interactions are essential for understanding the solvation properties of biomolecules and the effects of solvation upon biomolecular folding, binding, enzyme catalysis, and dynamics. Electrostatics, therefore, are of central importance to understanding biomolecular structure and modeling interactions within and among biological molecules. This review discusses the solvation of biomolecules with a computational biophysics view toward describing the phenomenon. While our main focus lies on the computational aspect of the models, we provide an overview of the basic elements of biomolecular solvation (e.g. solvent structure, polarization, ion binding, and non-polar behavior) in order to provide a background to understand the different types of solvation models.

  16. Efficient molecular mechanics simulations of the folding, orientation, and assembly of peptides in lipid bilayers using an implicit atomic solvation model

    Science.gov (United States)

    Bordner, Andrew J.; Zorman, Barry; Abagyan, Ruben

    2011-10-01

    Membrane proteins comprise a significant fraction of the proteomes of sequenced organisms and are the targets of approximately half of marketed drugs. However, in spite of their prevalence and biomedical importance, relatively few experimental structures are available due to technical challenges. Computational simulations can potentially address this deficit by providing structural models of membrane proteins. Solvation within the spatially heterogeneous membrane/solvent environment provides a major component of the energetics driving protein folding and association within the membrane. We have developed an implicit solvation model for membranes that is both computationally efficient and accurate enough to enable molecular mechanics predictions for the folding and association of peptides within the membrane. We derived the new atomic solvation model parameters using an unbiased fitting procedure to experimental data and have applied it to diverse problems in order to test its accuracy and to gain insight into membrane protein folding. First, we predicted the positions and orientations of peptides and complexes within the lipid bilayer and compared the simulation results with solid-state NMR structures. Additionally, we performed folding simulations for a series of host-guest peptides with varying propensities to form alpha helices in a hydrophobic environment and compared the structures with experimental measurements. We were also able to successfully predict the structures of amphipathic peptides as well as the structures for dimeric complexes of short hexapeptides that have experimentally characterized propensities to form beta sheets within the membrane. Finally, we compared calculated relative transfer energies with data from experiments measuring the effects of mutations on the free energies of translocon-mediated insertion of proteins into lipid bilayers and of combined folding and membrane insertion of a beta barrel protein.

  17. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    Science.gov (United States)

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  18. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    Energy Technology Data Exchange (ETDEWEB)

    Bardhan, J. P.; Knepley, M. G.; Anitescu, M. (Biosciences Division); ( MCS); (Rush Univ.)

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  19. Aqueous Solvation of Polyalanine α-Helices with Specific Water Molecules and with the CPCM and SM5.2 Aqueous Continuum Models using Density Functional Theory

    OpenAIRE

    Marianski, Mateusz; Dannenberg, J. J.

    2012-01-01

    We present density functional theory (DFT) calculations at the X3LYP/D95(d,p) level on the solvation of polyalanine α-helices in water. The study includes the effects of discrete water molecules and the CPCM and AMSOL SM5.2 solvent continuum model both separately and in combination. We find that individual water molecules cooperatively hydrogen-bond to both the C- and N-termini of the helix, which results in increases in the dipole moment of the helix/water complex to more than the vector sum...

  20. Reference interaction site model with hydrophobicity induced density inhomogeneity: An analytical theory to compute solvation properties of large hydrophobic solutes in the mixture of polyatomic solvent molecules

    International Nuclear Information System (INIS)

    Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui

    2015-01-01

    Reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamical and structural properties of the solvent around macromolecules. On the other hand, it was widely suggested that there exists water density depletion around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute solvent radial distribution function (RDF) around large hydrophobic solute in water as well as its mixture with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and different water-acetonitrile mixtures could be computed, which agree well with the results of the molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solute with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with a reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties for the solvation of hydrophobic solute

  1. Incorporating Born solvation energy into the three-dimensional Poisson-Nernst-Planck model to study ion selectivity in KcsA K+ channels

    Science.gov (United States)

    Liu, Xuejiao; Lu, Benzhuo

    2017-12-01

    Potassium channels are much more permeable to potassium than sodium ions, although potassium ions are larger and both carry the same positive charge. This puzzle cannot be solved with the traditional Poisson-Nernst-Planck (PNP) theory of electrodiffusion, because the PNP model treats all ions as point charges, does not incorporate ion size information, and therefore cannot discriminate potassium from sodium ions. The PNP model can qualitatively capture some macroscopic properties of certain channel systems, such as current-voltage characteristics, conductance rectification, and inverse membrane potential. However, the traditional PNP model is a continuum mean-field model and omits or underestimates discrete ion effects, in particular the ion solvation or self-energy (which can be described by the Born model). It is known that the dehydration effect (closely related to ion size) is crucial to selective permeation in potassium channels. Therefore, we incorporated the Born solvation energy into the PNP model to account for ion hydration and dehydration effects when ions pass through inhomogeneous dielectric channel environments. A variational approach was adopted to derive a Born-energy-modified PNP (BPNP) model. The model was applied to study a cylindrical nanopore and a realistic KcsA channel, and three-dimensional finite element simulations were performed. The BPNP model can distinguish different ion species by ion radius and predicts selectivity for K+ over Na+ in KcsA channels. Furthermore, ion current rectification in the KcsA channel was observed with both the PNP and BPNP models. The I-V curve of the BPNP model for the KcsA channel indicated an inward rectifier effect for K+ (rectification ratio of ~3/2) but an outward rectifier effect for Na+ (rectification ratio of ~1/6).
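
    The size dependence that the Born term introduces can be illustrated with a back-of-the-envelope calculation: the Born self-energy of an ion scales as one over radius times permittivity, so a smaller ion pays a larger dehydration penalty when it moves from bulk water into a lower-dielectric channel region. The radii and channel permittivity below are illustrative, not the values used in the paper.

        import math

        E, EPS0, NA = 1.602176634e-19, 8.8541878128e-12, 6.02214076e23

        def born_energy_kj_mol(radius_nm, eps):
            """Born self-energy of a monovalent ion of given radius in a medium of permittivity eps."""
            joule = E**2 / (8.0 * math.pi * EPS0 * eps * radius_nm * 1e-9)
            return joule * NA / 1000.0

        def dehydration_penalty(radius_nm, eps_bulk=80.0, eps_channel=30.0):
            """Extra Born energy (kJ/mol) paid on moving from bulk water into the channel."""
            return born_energy_kj_mol(radius_nm, eps_channel) - born_energy_kj_mol(radius_nm, eps_bulk)

        # Illustrative ionic radii: the smaller Na+ pays a larger penalty than K+.
        for ion, r in (("Na+", 0.095), ("K+", 0.133)):
            print(f"{ion}: {dehydration_penalty(r):.1f} kJ/mol")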

  2. Revealing the Solvation Structure and Dynamics of Carbonate Electrolytes in Lithium-Ion Batteries by Two-Dimensional Infrared Spectrum Modeling.

    Science.gov (United States)

    Liang, Chungwen; Kwak, Kyungwon; Cho, Minhaeng

    2017-12-07

    Carbonate electrolytes in lithium-ion batteries play a crucial role in conducting lithium ions between two electrodes. Mixed solvent electrolytes consisting of linear and cyclic carbonates are commonly used in commercial lithium-ion batteries. To understand how the linear and cyclic carbonates introduce different solvation structures and dynamics, we performed molecular dynamics simulations of two representative electrolyte systems containing either linear or cyclic carbonate solvents. We then modeled their two-dimensional infrared (2DIR) spectra of the carbonyl stretching mode of these carbonate molecules. We found that the chemical exchange process involving formation and dissociation of lithium-ion/carbonate complexes is responsible for the growth of 2DIR cross peaks with increasing waiting time. In addition, we also found that cyclic carbonates introduce faster dynamics of dissociation and formation of lithium-ion/carbonate complexes than linear carbonates. These findings provide new insights into understanding the lithium-ion mobility and its interplay with solvation structure and ultrafast dynamics in carbonate electrolytes used in lithium-ion batteries.

  3. Modeling human reliability analysis using MIDAS

    International Nuclear Information System (INIS)

    Boring, R. L.

    2006-01-01

    This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory is teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the path for cost-effective, timely, and valid simulated control room operators for studying current and next generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. This paper also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across static, dynamic, and initiator conditions common to control room scenarios. This paper concludes with a discussion of the relationship of the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)

  4. Experimental and computational studies of polar solvation

    International Nuclear Information System (INIS)

    1990-01-01

    This report from the Pennsylvania State University contains seven sections: (1) radiative rate effects in solvatochromic probes; (2) intramolecular charge transfer reactions; (3) solvation dynamics in low-temperature alcohols; (4) ionic solvation dynamics; (5) solvation and proton-transfer dynamics in 7-azaindole; (6) computer simulations of solvation dynamics; (7) solvation in supercritical fluids. 20 refs., 11 figs

  5. Solvated protein-DNA docking using HADDOCK

    NARCIS (Netherlands)

    van Dijk, Marc; Visscher, Koen M; Bonvin, Alexandre M.J.J; Kastritis, Panagiotis L.

    2013-01-01

    Interfacial water molecules play an important role in many aspects of protein-DNA specificity and recognition. Yet they have been mostly neglected in the computational modeling of these complexes. We present here a solvated docking protocol that allows explicit inclusion of water molecules in the docking of protein-DNA complexes.

  6. Human reliability data collection and modelling

    International Nuclear Information System (INIS)

    1991-09-01

    The main purpose of this document is to review and outline the current state of the art of Human Reliability Assessment (HRA) used for the quantitative assessment of the safe and economical operation of nuclear power plants. Another objective is to consider Human Performance Indicators (HPI), which can alert plant managers and regulators to departures from states of normal and acceptable operation. These two objectives are met in the three sections of this report. The first objective has been divided into two areas, based on the location of the human actions being considered. That is, the modelling and data collection associated with control room actions are addressed first in chapter 1, while actions outside the control room (including maintenance) are addressed in chapter 2. Both chapters 1 and 2 present a brief outline of the current status of HRA for these areas and the major outstanding issues. Chapter 3 discusses HPI. Such performance indicators can signal, at various levels, changes in factors which influence human performance. The final section of this report consists of papers presented by the participants of the Technical Committee Meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  7. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation is essentially an optimization of the S.T.I. of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation for computing system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.) and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author)
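
    The kind of trade-off that S.T.I. optimization targets can be sketched with the classic approximation for the mean unavailability of a periodically tested standby component: a term growing with the test interval (undetected failures), a term shrinking with it (test downtime), and a repair contribution. The rates and durations below are illustrative, not taken from the Atucha/Embalse study.

        lam, t_test, t_repair = 1.0e-5, 2.0, 24.0   # failure rate (1/h), test and repair times (h)

        def mean_unavailability(T):
            """Periodically tested standby component: lambda*T/2 + test downtime + repair term."""
            return lam * T / 2.0 + t_test / T + lam * t_repair

        candidates = range(24, 8760, 24)            # candidate surveillance test intervals (h)
        best = min(candidates, key=mean_unavailability)
        print(f"optimal test interval ~ {best} h, unavailability ~ {mean_unavailability(best):.2e}")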

  8. Solvated protein-DNA docking using HADDOCK

    Energy Technology Data Exchange (ETDEWEB)

    Dijk, Marc van; Visscher, Koen M.; Kastritis, Panagiotis L.; Bonvin, Alexandre M. J. J., E-mail: a.m.j.j.bonvin@uu.nl [Utrecht University, Bijvoet Center for Biomolecular Research, Faculty of Science-Chemistry (Netherlands)

    2013-05-15

    Interfacial water molecules play an important role in many aspects of protein-DNA specificity and recognition. Yet they have been mostly neglected in the computational modeling of these complexes. We present here a solvated docking protocol that allows explicit inclusion of water molecules in the docking of protein-DNA complexes and demonstrate its feasibility on a benchmark of 30 high-resolution protein-DNA complexes containing crystallographically-determined water molecules at their interfaces. Our protocol is capable of reproducing the solvation pattern at the interface and recovers hydrogen-bonded water-mediated contacts in many of the benchmark cases. Solvated docking leads to an overall improvement in the quality of the generated protein-DNA models for cases with limited conformational change of the partners upon complex formation. The applicability of this approach is demonstrated on real cases by docking a representative set of 6 complexes using unbound protein coordinates, model-built DNA and knowledge-based restraints. As HADDOCK supports the inclusion of a variety of NMR restraints, solvated docking is also applicable for NMR-based structure calculations of protein-DNA complexes.

  9. Solvated protein–DNA docking using HADDOCK

    International Nuclear Information System (INIS)

    Dijk, Marc van; Visscher, Koen M.; Kastritis, Panagiotis L.; Bonvin, Alexandre M. J. J.

    2013-01-01

    Interfacial water molecules play an important role in many aspects of protein–DNA specificity and recognition. Yet they have been mostly neglected in the computational modeling of these complexes. We present here a solvated docking protocol that allows explicit inclusion of water molecules in the docking of protein–DNA complexes and demonstrate its feasibility on a benchmark of 30 high-resolution protein–DNA complexes containing crystallographically-determined water molecules at their interfaces. Our protocol is capable of reproducing the solvation pattern at the interface and recovers hydrogen-bonded water-mediated contacts in many of the benchmark cases. Solvated docking leads to an overall improvement in the quality of the generated protein–DNA models for cases with limited conformational change of the partners upon complex formation. The applicability of this approach is demonstrated on real cases by docking a representative set of 6 complexes using unbound protein coordinates, model-built DNA and knowledge-based restraints. As HADDOCK supports the inclusion of a variety of NMR restraints, solvated docking is also applicable for NMR-based structure calculations of protein–DNA complexes.

  10. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  11. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problem that a variety of uncertainty variables coexist in engineering structural reliability analysis, a new hybrid reliability index for evaluating structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  12. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize risk-based preventive maintenance of passive structures such as pipes and supports. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, and involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  13. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

    The reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive pieces of equipment in a power system, and their proper function is vital for the substations and utilities. Therefore, a reliability model of the power transformer is very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  14. Interfacial solvation thermodynamics

    International Nuclear Information System (INIS)

    Ben-Amotz, Dor

    2016-01-01

    Previous studies have reached conflicting conclusions regarding the interplay of cavity formation, polarizability, desolvation, and surface capillary waves in driving the interfacial adsorptions of ions and molecules at air–water interfaces. Here we revisit these questions by combining exact potential distribution results with linear response theory and other physically motivated approximations. The results highlight both exact and approximate compensation relations pertaining to direct (solute–solvent) and indirect (solvent–solvent) contributions to adsorption thermodynamics, of relevance to solvation at air–water interfaces, as well as a broader class of processes linked to the mean force potential between ions, molecules, nanoparticles, proteins, and biological assemblies. (paper)

  15. Conditional solvation thermodynamics of isoleucine in model peptides and the limitations of the group-transfer model.

    Science.gov (United States)

    Tomar, Dheeraj S; Weber, Valéry; Pettitt, B Montgomery; Asthagiri, D

    2014-04-17

    The hydration thermodynamics of the amino acid X relative to the reference G (glycine), or the hydration thermodynamics of a small-molecule analog of the side chain of X, is often used to model the contribution of X to protein stability and solution thermodynamics. We consider the reasons for the successes and limitations of this approach by calculating and comparing the conditional excess free energy, enthalpy, and entropy of hydration of the isoleucine side chain in zwitterionic isoleucine, in extended penta-peptides, and in helical deca-peptides. Butane in the gauche conformation serves as a small-molecule analog for the isoleucine side chain. Parsing the hydrophobic and hydrophilic contributions to hydration for the side chain shows that both of these aspects of hydration are context-sensitive. Furthermore, analyzing the solute-solvent interaction contribution to the conditional excess enthalpy of the side chain shows that what is nominally considered a property of the side chain includes entirely nonobvious contributions of the background. The context-sensitivity of hydrophobic and hydrophilic hydration and the conflation of background contributions with energetics attributed to the side chain limit the ability of a single scaling factor, such as the fractional solvent exposure of the group in the protein, to map the component energetic contributions of the model-compound data to their value in the protein. However, if one ignores the origin of cancellations in the underlying components, the group-transfer model may appear to provide a reasonable estimate of the free energy for a given error tolerance.

  16. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in its load transmission path, system-component relationship, system functioning manner, and time-dependent system configuration. First, the present paper defines the time-domain series system, to which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, and the treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by means of several numerical examples. - Highlights: • A new type of series system, the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical-analysis-based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  17. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

    The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of the flaw detection probability on various factors and to evaluate the performance of inspection equipment, including the sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results of intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to the flaw detection frequency data of all inspection teams and to the flaw sizing data of one participating team. (author)
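
    A common form for such flaw-detection-probability models is a probability-of-detection (POD) curve that is logistic in the logarithm of flaw size; the sketch below uses that form with made-up location and scale parameters (in practice these would be fitted to hit/miss or signal-response inspection data).

        import math

        def pod(a_mm, mu=0.5, sigma=0.4):
            """Log-logistic probability of detection for a flaw of size a_mm (mm)."""
            z = (math.log(a_mm) - mu) / sigma
            return 1.0 / (1.0 + math.exp(-z))

        # Flaw size detected with 90 % probability (a90), solved analytically for this curve.
        a90 = math.exp(0.5 + 0.4 * math.log(9.0))
        print(f"POD(1 mm) = {pod(1.0):.2f}, POD(3 mm) = {pod(3.0):.2f}, a90 = {a90:.1f} mm")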

  18. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  19. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

    The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components

  20. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. DETC2015-46982 (draft): Development of a Conservative Model Validation Approach for Reliable Analysis. [...] obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the [...] the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account [...]

  1. Relating pressure tuned coupled column ensembles with the solvation parameter model for tunable selectivity in gas chromatography.

    Science.gov (United States)

    Sharif, Khan M; Kulsing, Chadin; Chin, Sung-Tong; Marriott, Philip J

    2016-07-15

    The differential pressure drop of the carrier gas, obtained by tuning the junction point pressure of a coupled column gas chromatographic system, leads to a unique selectivity of the overall separation, which can be tested using a mixture of compounds with a wide range of polarity. This study demonstrates a pressure tuning (PT) GC system employing a microfluidic Deans switch located at the mid-point of the two capillary columns. This PT system allowed variations of inlet-outlet pressure differences of the two columns in a range of 52-17 psi for the upstream column and 31-11 psi for the downstream column. Peak shifting (differential migration) of compounds due to the PT difference is related to a first-order regression equation in a Plackett-Burman factorial study. An increased first (upstream) column pressure drop makes the second column characteristics more significant in the coupled column retention behavior, and conversely an increased second (downstream) column pressure drop makes the first column characteristics more apparent; such variation can result in component swapping between polar and non-polar compounds. The coupled column system selectivity was evaluated in terms of linear solvation energy relationship (LSER) parameters, and their relation with different pressure drop effects was established by applying multivariate principal component analysis (PCA). It was found that the coupled column PT system descriptors show a clear clustering of different pressure settings, somewhat intermediate between those of the two commercial columns. This is equivalent to that obtained from a conventional single-column GC analysis, where the interaction energy contributed by the stationary phases can be significantly adjusted by the choice of midpoint PT. This result provides a foundation for pressure differentiation for selectivity enhancement. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Developing Fast and Reliable Flood Models

    DEFF Research Database (Denmark)

    Thrysøe, Cecilie; Toke, Jens; Borup, Morten

    2016-01-01

    ... A surrogate model is set up for a case study area in Aarhus, Denmark, to replace a MIKE FLOOD model. The drainage surrogates are able to reproduce the MIKE URBAN results for a set of rain inputs. The coupled drainage-surface surrogate model lacks details in the surface description, which reduces its overall ... accuracy. The model shows no instability, hence larger time steps can be applied, which reduces the computational time by more than a factor of 1400. In conclusion, surrogate models show great potential for usage in urban water modelling. ...

  3. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of independent nodes (that can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs, because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, this paper proposes evaluating the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation, using a propagation-based link reliability model rather than a binary model, with nodes following a known failure distribution. The method is illustrated with an application and some important results are also presented
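
    A minimal Monte Carlo sketch of the idea is given below: node positions are random, each node is up with a fixed probability, and each link succeeds with a distance-dependent probability instead of a hard transmission-range cutoff; two-terminal reliability (2TRm) is then the fraction of trials in which a source-terminal path exists. The link-success function and all parameter values are illustrative assumptions, not the paper's propagation model.

        import math
        import random

        def link_prob(d, r0=100.0, alpha=2.5):
            """Distance-dependent link success probability (soft decay, no hard cutoff)."""
            return math.exp(-((d / r0) ** alpha))

        def two_terminal_reliability(nodes, p_node=0.95, trials=2000):
            """Monte Carlo estimate of connectivity from node 0 to the last node."""
            n, hits = len(nodes), 0
            for _ in range(trials):
                up = [random.random() < p_node for _ in range(n)]
                adj = [[False] * n for _ in range(n)]
                for i in range(n):
                    for j in range(i + 1, n):
                        if up[i] and up[j] and random.random() < link_prob(math.dist(nodes[i], nodes[j])):
                            adj[i][j] = adj[j][i] = True
                seen, stack = {0}, [0] if up[0] else []   # depth-first search from the source
                while stack:
                    u = stack.pop()
                    for v in range(n):
                        if adj[u][v] and v not in seen:
                            seen.add(v)
                            stack.append(v)
                hits += (n - 1) in seen
            return hits / trials

        random.seed(1)
        nodes = [(random.uniform(0, 300), random.uniform(0, 300)) for _ in range(15)]
        print(f"2TRm estimate: {two_terminal_reliability(nodes):.3f}")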

  4. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

    The objective of this paper is to improve the operational reliability of operators at real nuclear power plants through simulation research on the cognition reliability of nuclear power plant operators. The research method is to use a nuclear power plant simulator as the research platform and, taking the current international human cognition reliability model based on the three-parameter Weibull distribution as a reference, to develop a research model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Using the two-parameter Weibull cognition reliability model, experiments on the cognition reliability of nuclear power plant operators have been carried out. The results are consistent with those reported by other countries such as the USA and Hungary, which benefits the safe operation of nuclear power plants.
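
    In a two-parameter Weibull formulation of this kind, the quantity of interest is typically the probability that the crew has not yet completed the required cognitive task by time t. The sketch below evaluates that non-response probability; the scale and shape parameters are illustrative placeholders, not values fitted from the simulator experiments.

        import math

        def non_response_prob(t, eta=300.0, beta=1.2):
            """Two-parameter Weibull non-response probability at time t (seconds)."""
            return math.exp(-((t / eta) ** beta))

        for t in (60, 300, 900):
            print(f"t = {t:4d} s  P(non-response) = {non_response_prob(t):.3f}")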

  5. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper differs from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem and derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of the failure probability density
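
    The simplest consequence of a bounded constant failure rate can be written down directly: if the rate is only known to lie in an interval, the survival probability at any time is bracketed by two exponentials. The sketch below shows this elementary bound (the paper's Lagrange-based machinery handles richer sets of constraints); the numbers are illustrative.

        import math

        def reliability_bounds(t, lam_lo, lam_hi):
            """Interval-valued survival probability for a constant failure rate in [lam_lo, lam_hi]."""
            return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

        lo, hi = reliability_bounds(t=1000.0, lam_lo=1e-4, lam_hi=3e-4)
        print(f"R(1000 h) lies in [{lo:.3f}, {hi:.3f}]")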

  6. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  7. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

    The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented, utilizing Success State and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, and monitoring by reactor operators, as well as for the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)

  8. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...

  9. Aqueous solvation of polyalanine α-helices with specific water molecules and with the CPCM and SM5.2 aqueous continuum models using density functional theory.

    Science.gov (United States)

    Marianski, Mateusz; Dannenberg, J J

    2012-02-02

    We present density functional theory (DFT) calculations at the X3LYP/D95(d,p) level on the solvation of polyalanine α-helices in water. The study includes the effects of discrete water molecules and the CPCM and AMSOL SM5.2 solvent continuum model both separately and in combination. We find that individual water molecules cooperatively hydrogen-bond to both the C- and N-termini of the helix, which results in increases in the dipole moment of the helix/water complex to more than the vector sum of their individual dipole moments. These waters are found to be more stable than in bulk solvent. On the other hand, individual water molecules that interact with the backbone lower the dipole moment of the helix/water complex to below that of the helix itself. Small clusters of waters at the termini increase the dipole moments of the helix/water aggregates, but the effect diminishes as more waters are added. We discuss the somewhat complex behavior of the helix with the discrete waters in the continuum models.

  10. Models for Battery Reliability and Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G. H.; Neubauer, J.; Pesaran, A.

    2014-03-01

    Models describing battery degradation physics are needed to more accurately understand how battery usage and next-generation battery designs can be optimized for performance and lifetime. Such lifetime models may also reduce the cost of battery aging experiments and shorten the time required to validate battery lifetime. Models for chemical degradation and mechanical stress are reviewed. Experimental analysis of aging data from a commercial iron-phosphate lithium-ion (Li-ion) cell elucidates the relative importance of several mechanical stress-induced degradation mechanisms.

  11. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    The reliability analysis for industrial maintenance is now increasingly demanded by industry worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems, and these systems generate a large volume of data. These data can be used to inform future decisions affecting the health and state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or not reliable. In this context, to analyze the reliability of an oil pump, this work proposes to examine and treat the incomplete, incorrect or aberrant data used in its reliability modeling. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.

  12. MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore

    2006-05-01

    This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.

  13. Theory of optical spectra of solvated electrons

    International Nuclear Information System (INIS)

    Kestner, N.R.

    1975-01-01

    During the last few years better theoretical models of the solvated electron have been developed. These models allow one to calculate a priori the observable properties of the trapped electron. One of the most important and most widely determined properties is the optical spectrum. In this paper we consider the predictions of the theories not only for the band maximum but also for the line shape and width. In addition we review how the theories predict these will depend on the solvent, pressure, temperature, and solvent density. In all cases extensive comparisons are made with experimental work. In addition, four new areas are explored and recent results are presented. These concern electrons in dense polar gases, the time development of the solvated electron spectrum, solvated electrons in mixed solvents, and photoelectron emission (PEE) spectra as they relate to higher excited states. This paper reviews all recent theoretical calculations and presents a critical assessment of the present status and of the future developments that are anticipated. The best theories are quite successful in predicting trends and in giving qualitative agreement concerning the band maximum. The theory is still weak in predicting line shape and line width

  14. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

    A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for getting an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as: 'What is the probability that this temperature limiter gives a false alarm?' or 'What is the probability that the air pressure in this subsystem drops below the lower limit?'. (orig./DG)

  15. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of Kvs is investigated, which shows that Kvs obeys a Gaussian distribution. Kvs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
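
    The core of a stress-strength interference calculation is the probability that strength exceeds stress; when both are treated as independent Gaussian variables (as the Gaussian behaviour reported for Kvs suggests), that probability has a closed form. The sketch below uses made-up means and standard deviations, not the paper's fitted values.

        import math

        def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
            """R = P(strength > stress) for independent Gaussian stress and strength."""
            z = (mu_strength - mu_stress) / math.sqrt(sd_strength**2 + sd_stress**2)
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF at z

        print(f"R = {interference_reliability(120.0, 15.0, 80.0, 20.0):.4f}")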

  16. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context, which many contemporary human reliability analysis (HRA) methods do not take sufficiently into account. The aim of this article is to attempt an integration of probabilistic and psychological approaches to human reliability. This is achieved first by adopting methods that adequately reflect the essential features of the process control activity, and secondly by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be obtained. These can be incorporated into the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on the activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis.

  17. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, they set back the projects. This paper discusses the need for systematic approaches to measuring and assuring software reliability, which consumes a major share of project development resources, and ways of quantifying reliability and using it for improvement and control of the software development and maintenance process. It covers reliability models with a focus on 'reliability growth', including data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions such as deployment, this information can guide software architecture, development, testing, and verification and validation. (author)

  18. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters; hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available in the market. To obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
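
    As a rough illustration of the kind of calculation the abstract describes, the sketch below combines a Weibull wind-speed model with an idealised turbine power curve (linear between cut-in and rated speed) and also evaluates the cubic-mean-cube-root wind speed. The shape and scale values, the power curve, and the turbine limits are hypothetical, and the paper's exact formulation may differ.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull wind-speed density with shape k and scale c."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-(v / c) ** k)

def mean_turbine_power(k, c, v_in, v_rated, v_out, p_rated, dv=0.01):
    """Expected power of an idealised turbine (linear ramp between cut-in and
    rated speed, constant at rated, zero outside the limits) under Weibull winds."""
    p, v = 0.0, 0.0
    while v < v_out:
        if v_in <= v < v_rated:
            power = p_rated * (v - v_in) / (v_rated - v_in)
        elif v_rated <= v < v_out:
            power = p_rated
        else:
            power = 0.0
        p += power * weibull_pdf(v, k, c) * dv
        v += dv
    return p

# Cubic mean cube root of wind speed for Weibull(k, c): (E[v^3])^(1/3) = c * Gamma(1 + 3/k)^(1/3)
k, c = 2.0, 7.0
v_cmc = c * math.gamma(1 + 3 / k) ** (1 / 3)
print(v_cmc, mean_turbine_power(k, c, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0e6))
```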

  19. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    Models considered include the Jelinski-Moranda model, the Geometric model, and Musa's model, together with a Monte Carlo study of the behavior of the least-squares estimators. (Remaining citation fragments: Proceedings Number 261, 1979, pp. 34-1 to 34-11; Sukert, Alan and Goel, Amrit, 'A Guidebook for Software Reliability Assessment', 1980.)

  20. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In this paper, power plant operation is modelled using continuous-time Markov chains with a discrete state space. The model is used to compute power plant reliability, the importance and influence of individual states, and the transition probabilities between states. For comparison, the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
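
    A minimal sketch of the continuous-time Markov chain calculation described above: the steady-state probabilities of a small hypothetical plant model are obtained from the generator matrix, and availability is read off from the power-producing states. The three-state structure and all rates below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-state plant: 0 = full power, 1 = derated, 2 = outage.
# Q[i, j] is the transition rate i -> j (per hour); rows sum to zero.
Q = np.array([[-0.002,  0.0015,  0.0005],
              [ 0.05,  -0.06,    0.01  ],
              [ 0.02,   0.0,    -0.02  ]])

# Steady-state probabilities pi solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]          # any power-producing state
print(pi, availability)
```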

  1. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  2. Preferential solvation: dividing surface vs excess numbers.

    Science.gov (United States)

    Shimizu, Seishi; Matubayasi, Nobuyuki

    2014-04-10

    How do osmolytes affect the conformation and configuration of supramolecular assembly, such as ion channel opening and actin polymerization? The key to the answer lies in the excess solvation numbers of water and osmolyte molecules; these numbers are determinable solely from experimental data, as guaranteed by the phase rule, as we show through the exact solution theory of Kirkwood and Buff (KB). The osmotic stress technique (OST), in contrast, purposes to yield alternative hydration numbers through the use of the dividing surface borrowed from the adsorption theory. However, we show (i) OST is equivalent, when it becomes exact, to the crowding effect in which the osmolyte exclusion dominates over hydration; (ii) crowding is not the universal driving force of the osmolyte effect (e.g., actin polymerization); (iii) the dividing surface for solvation is useful only for crowding, unlike in the adsorption theory which necessitates its use due to the phase rule. KB thus clarifies the true meaning and limitations of the older perspectives on preferential solvation (such as solvent binding models, crowding, and OST), and enables excess number determination without any further assumptions.

  3. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  4. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering, where continuous probability distributions have been used as models for the lifetime of technical components. We consider the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and a beta-binomial model for the analysis of failure probability. The parameters of these three models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function, using the method of maximum likelihood to estimate the parameters of the Weibull model. A common-cause failure can seriously reduce the reliability of a system; we consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common-cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
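
    For one of the simpler cases mentioned (a Weibull lifetime model with complete, uncensored data), maximum likelihood estimation can be sketched as a direct numerical minimisation of the negative log-likelihood; censoring, the mixture models, and the point-process models discussed in the work would require additional terms. All parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lifetimes = 1200.0 * rng.weibull(1.8, size=200)   # synthetic complete (uncensored) lifetimes

def neg_log_lik(params):
    """Negative log-likelihood of a two-parameter Weibull for complete data."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(weibull_min.logpdf(lifetimes, c=shape, scale=scale))

fit = minimize(neg_log_lik, x0=[1.0, float(lifetimes.mean())], method="Nelder-Mead")
print(fit.x)   # estimated (shape, scale), expected to be near (1.8, 1200)
```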

  5. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the widespread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects.

  6. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
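
    The abstract contrasts a system reliability prediction that includes common-mode failures with one assuming independence. A simple beta-factor-style sketch (not necessarily the exact model of the report) shows how splitting each train's failure rate into independent and common-cause parts changes the two-train failure probability; all numbers are hypothetical.

```python
import math

def two_train_failure_prob(lambda_total, beta, mission_time):
    """Mission failure probability of a two-train redundant system when each
    train's total failure rate is split into an independent part (1 - beta)
    and a common-cause part (beta), with exponential failure times."""
    lam_ind = (1.0 - beta) * lambda_total
    lam_ccf = beta * lambda_total
    p_ind = 1.0 - math.exp(-lam_ind * mission_time)   # one train, independent causes
    p_ccf = 1.0 - math.exp(-lam_ccf * mission_time)   # shared-cause event
    # The system fails if both trains fail independently or a common-cause event occurs
    return 1.0 - (1.0 - p_ind ** 2) * (1.0 - p_ccf)

print(two_train_failure_prob(lambda_total=1e-3, beta=0.1, mission_time=24.0))   # with common cause
print(two_train_failure_prob(lambda_total=1e-3, beta=0.0, mission_time=24.0))   # independence assumption
```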

  7. Standard electrode potential, Tafel equation, and the solvation thermodynamics.

    Science.gov (United States)

    Matyushov, Dmitry V

    2009-06-21

    Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We therefore are capable of addressing the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric in respect to the change in the sign of the electrode overpotential.

  8. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described

  9. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  10. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- to sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, (2) scale invariance across…

  11. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  12. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS linac (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using risk spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS linac parts/systems are: 1) the SCL (superconducting linac) and front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Enough diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  13. Order and correlation contributions to the entropy of hydrophobic solvation

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Maoyuan; Besford, Quinn Alexander; Mulvaney, Thomas; Gray-Weale, Angus, E-mail: gusgw@gusgw.net [School of Chemistry, The University of Melbourne, Victoria 3010 (Australia)

    2015-03-21

    The entropy of hydrophobic solvation has been explained as the result of ordered solvation structures, of hydrogen bonds, of the small size of the water molecule, of dispersion forces, and of solvent density fluctuations. We report a new approach to the calculation of the entropy of hydrophobic solvation, along with tests of and comparisons to several other methods. The methods are assessed in the light of the available thermodynamic and spectroscopic information on the effects of temperature on hydrophobic solvation. Five model hydrophobes in SPC/E water give benchmark solvation entropies via Widom’s test-particle insertion method, and other methods and models are tested against these particle-insertion results. Entropies associated with distributions of tetrahedral order, of electric field, and of solvent dipole orientations are examined. We find these contributions are small compared to the benchmark particle-insertion entropy. Competitive with or better than other theories in accuracy, but with no free parameters, is the new estimate of the entropy contributed by correlations between dipole moments. Dipole correlations account for most of the hydrophobic solvation entropy for all models studied and capture the distinctive temperature dependence seen in thermodynamic and spectroscopic experiments. Entropies based on pair and many-body correlations in number density approach the correct magnitudes but fail to describe temperature and size dependences, respectively. Hydrogen-bond definitions and free energies that best reproduce entropies from simulations are reported, but it is difficult to choose one hydrogen bond model that fits a variety of experiments. The use of information theory, scaled-particle theory, and related methods is discussed briefly. Our results provide a test of the Frank-Evans hypothesis that the negative solvation entropy is due to structured water near the solute, complement the spectroscopic detection of that solvation structure by

  14. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

    In this paper the reliability of NDT measurements is approached from three directions: we model the flaw sizing performance and the probability of flaw detection, and develop models to update the knowledge of the true flaw size based on sequential measurement results and the flaw sizing reliability model. In the discussed models the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics plus random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions are developed. Using these models, it is possible to take into account prior information on the flaw size and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies.
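
    Because the sizing model works on logarithmic transforms, Bayesian updating of the flaw-size distribution reduces, in the simplest unbiased case, to a conjugate normal update on the log scale. The sketch below illustrates that idea with hypothetical measurements and error magnitudes; the paper's models may include a systematic sizing function in addition to random noise.

```python
import math

def update_lognormal(prior_mu, prior_sigma, measured_value, meas_sigma_log):
    """Conjugate update on the log scale: prior log-size ~ N(prior_mu, prior_sigma^2),
    measurement log-error ~ N(0, meas_sigma_log^2), unbiased sizing assumed."""
    y = math.log(measured_value)
    prec = 1.0 / prior_sigma**2 + 1.0 / meas_sigma_log**2
    post_mu = (prior_mu / prior_sigma**2 + y / meas_sigma_log**2) / prec
    post_sigma = math.sqrt(1.0 / prec)
    return post_mu, post_sigma

# Sequential measurements of one flaw depth (mm), with a hypothetical ~20% sizing scatter
mu, sigma = math.log(2.0), 0.5            # vague prior centred near 2 mm
for depth in [3.1, 2.7, 3.4]:
    mu, sigma = update_lognormal(mu, sigma, depth, meas_sigma_log=0.2)
print(math.exp(mu), sigma)                # posterior median depth and log-scale spread
```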

  15. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for identifying cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model that provides predictions in good agreement with observed system performance and that is applicable by non-experts in the field of reliability.

  16. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory discussing their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, among others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  17. Fuse Modeling for Reliability Study of Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach on reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced to the fuse element and will wear out the component. Furthermore, the fuse may be used in a large......, and rated voltage/current are opposed to shift in time to effect early breaking during the normal operation of the circuit. Therefore, in such cases, a reliable protection required for the other circuit components will not be achieved. The thermo-mechanical models, fatigue analysis and thermo...

  18. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problem that a variety of uncertain variables coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and it is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  19. Femtosecond spectroscopic study of the solvation of amphiphilic molecules by water

    NARCIS (Netherlands)

    Rezus, Y.L.A.; Bakker, H.J.

    2008-01-01

    We use polarization-resolved mid-infrared pump-probe spectroscopy to study the aqueous solvation of proline and N-methylacetamide. These molecules serve as models to study the solvation of proteins. We monitor the orientational dynamics of partly deuterated water molecules (HDO) that are present at

  20. Theories of the solvated electron

    International Nuclear Information System (INIS)

    Kestner, N.R.

    1987-01-01

    In this chapter the authors address only the final state of the electron, that is, the solvated state, which, if no chemical reaction occurred, would be a stable entity with well-defined characteristics. Except for some metal-ammonia solutions, and possibly a few other cases, such stable species in reality exist for only a short time (often as short as microseconds). Nevertheless, this chapter deals only with this final 'time-independent', 'completely solvated', equilibrium species. The last qualifier is added to indicate that the solvent around the electron has also come to thermal equilibrium with the field of the charge.

  1. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  2. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters the values of which can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves which represent the probability of control room crew non-response as a function of time for different conditions affecting their performance. The non-response probability is then a contributor to the overall non-success of operating crews to achieve a functional objective identified in the PRA study. Simulator data and some small scale tests were utilized to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing since the data were sparse. The model can potentially help PRA analysts make human reliability assessments more explicit. The model incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources and crew response time data from simulator training exercises

  3. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which resulted in less than acceptable repeatability. Based on our results, the Adelaide In-Shoe Foot Model can be used with confidence for 24 commonly measured biomechanical variables during shod walking.

  4. Modeling vapor pressures of solvent systems with and without a salt effect: An extension of the LSER approach

    International Nuclear Information System (INIS)

    Senol, Aynur

    2015-01-01

    Highlights: • A new polynomial vapor pressure approach for pure solvents is presented. • Solvation models reproduce the vapor pressure data within a 4% mean error. • A concentration-based vapor pressure model is also implemented on relevant systems. • The reliability of existing models was analyzed using a log-ratio objective function. - Abstract: A new polynomial vapor pressure approach for pure solvents is presented. The model is incorporated into the LSER (linear solvation energy relation) based solvation model framework and checked for consistency in reproducing experimental vapor pressures of salt-containing solvent systems. The two structural forms developed for the generalized solvation model (Senol, 2013) provide a relatively accurate description of the salting effect on the vapor pressure of (solvent + salt) systems. Equilibrium data spanning the vapor pressures of eighteen (solvent + salt) and three (solvent (1) + solvent (2) + salt) systems were used to establish the basis for the model reliability analysis using a log-ratio objective function. The examined vapor pressure relations reproduce the observed behavior relatively accurately, yielding overall design factors of 1.084, 1.091 and 1.052 for the integrated property-basis solvation model (USMIP), the reduced property-basis solvation model and the concentration-dependent model, respectively. Both the integrated property-basis and reduced property-basis solvation models were able to simulate satisfactorily the vapor pressure data of a binary solvent mixture involving a salt, yielding an overall mean error of 5.2%.

  5. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

    Reliability of electronics that operate outdoor is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed...

  6. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Models of system reliability that account for the restoration and preventive-maintenance parameters of system blocks are described. Expressions are given for the average number of preventive maintenance actions and for the availability coefficient of the system blocks.

  7. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling, to the data collection and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information such as results of developers' testing, historical information on a specific functionality and its attributes, and, is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, provides assessments at a functional level as well as at a systems' level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next the mechanisms for incorporating these sources of relevant data to the FASRE model are identified

  8. Rotation and solvation of ammonium ion

    International Nuclear Information System (INIS)

    Perrin, C.L.; Gipe, R.K.

    1987-01-01

    From nitrogen-15 spin-lattice relaxation times and nuclear Overhauser enhancements, the rotational correlation time τc for 15NH4+ was determined in a series of solvents. Values of τc range from 0.46 to 20 picoseconds. The solvent dependence of τc cannot be explained in terms of solvent polarity, molecular dipole moment, solvent basicity, solvent dielectric relaxation, or solvent viscosity. The rapid rotation and the variation with solvent can be accounted for by a model that involves hydrogen bonding of an NH proton to more than one solvent molecule in a disordered solvation environment. 25 references, 1 table

  9. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of the software system becomes large and the number of faults detected during the testing phase grows, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault-detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error-detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of the stochastic differential equation, performs comparatively better than the existing NHPP-based models.
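
    As a simplified illustration of an Itô-type SRGM (using a constant detection rate rather than the logistic error-detection function of the paper), sample paths of the fault-count process can be generated with the Euler-Maruyama scheme; all parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_sde_srgm(a=100.0, b=0.05, sigma=0.1, T=100.0, dt=0.01):
    """Euler-Maruyama path of a simple Ito-type SRGM,
    dN = b (a - N) dt + sigma (a - N) dW,
    where a is the total fault content and b a constant detection rate."""
    n_steps = int(T / dt)
    N = np.empty(n_steps + 1)
    N[0] = 0.0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        N[i + 1] = N[i] + b * (a - N[i]) * dt + sigma * (a - N[i]) * dW
    return N

paths = np.array([simulate_sde_srgm()[-1] for _ in range(200)])
print(paths.mean(), paths.std())   # detected faults at time T and their spread
```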

  10. Modular reliability modeling of the TJNAF personnel safety system

    International Nuclear Information System (INIS)

    Cinnamon, J.; Mahoney, K.

    1997-01-01

    A reliability model for the Thomas Jefferson National Accelerator Facility (formerly CEBAF) personnel safety system has been developed. The model, which was implemented using an Excel spreadsheet, allows simulation of all or parts of the system. Modularity of the model's implementation allows rapid "what if" case studies to simulate changes in safety system parameters such as redundancy, diversity, and failure rates. Particular emphasis is given to the prediction of failure modes which would result in the failure of both redundant safety interlock systems. In addition to calculating the predicted reliability of the safety system, the model also calculates the availability of the same system. Such calculations allow the user to make tradeoff studies between reliability and availability, and to target resources to improving those parts of the system which would most benefit from redesign or upgrade. The model includes calculated data, manufacturer's data, and Jefferson Lab field data. This paper describes the model, the methods used, and a comparison of calculated to actual data for the Jefferson Lab personnel safety system. Examples are given to illustrate the model's utility and ease of use.
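
    The reliability/availability tradeoff studies mentioned above can be illustrated with a toy spreadsheet-style calculation: a series chain of components, each with an assumed MTBF and MTTR, compared against a one-out-of-two redundant arrangement. The component list and numbers below are hypothetical, not Jefferson Lab data.

```python
import math

def series(values):
    """Reliability (or availability) of components in series."""
    p = 1.0
    for v in values:
        p *= v
    return p

def one_of_two(p_single):
    """At least one of two identical redundant chains works."""
    return 1.0 - (1.0 - p_single) ** 2

# Hypothetical interlock chain: sensor -> logic -> actuator (per-channel MTBF in hours)
mtbf = {"sensor": 5e4, "logic": 2e5, "actuator": 8e4}
mttr = 8.0            # repair time in hours, hypothetical
mission = 1000.0      # mission time in hours

rel_single = series(math.exp(-mission / m) for m in mtbf.values())
avail_single = series(m / (m + mttr) for m in mtbf.values())

print("single chain:", rel_single, avail_single)
print("1-out-of-2  :", one_of_two(rel_single), one_of_two(avail_single))
```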

  11. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  12. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

  13. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  14. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  15. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
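
    A stripped-down illustration of the dynamic probabilistic idea (not the actual PV-RPM implementation): many realizations are drawn in which an inverter fails and is repaired at random times and module output degrades each year, and the distribution of lifetime energy is then summarised. All failure rates, repair times, and yields below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_realizations, years = 500, 10
hours = years * 8760

inverter_mtbf, inverter_mttr = 40_000.0, 72.0     # hours, hypothetical
energy = np.zeros(n_realizations)

for r in range(n_realizations):
    t, downtime = 0.0, 0.0
    while t < hours:
        t += rng.exponential(inverter_mtbf)        # time to the next inverter failure
        if t < hours:
            downtime += rng.exponential(inverter_mttr)
    degradation = 0.995 ** np.arange(years)        # 0.5%/yr module degradation
    yearly_energy = 1500.0 * degradation           # kWh/kWp per year, hypothetical yield
    energy[r] = yearly_energy.sum() * (1.0 - downtime / hours)

print(energy.mean(), np.percentile(energy, [5, 95]))
```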

  16. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

    Full Text Available The article presents the Bring Your Own Device (BYOD) model as a network model which provides the user reliable access to network resources. BYOD is a dynamically developing model that can be applied in many areas. A research network was launched in order to carry out tests in which the Work Folders service was used as the BYOD service; this service allows the user to synchronize files between the device and the server. Access to the network is provided through wireless communication using the 802.11n standard. The obtained results are shown and analyzed in this article.

  17. Dipole moments of molecules solvated in helium nanodroplets

    International Nuclear Information System (INIS)

    Stiles, Paul L.; Nauta, Klaas; Miller, Roger E.

    2003-01-01

    Stark spectra are reported for hydrogen cyanide and cyanoacetylene solvated in helium nanodroplets. The goal of this study is to understand the influence of the helium solvent on measurements of the permanent electric dipole moment of a molecule. We find that the dipole moments of the helium solvated molecules, calculated assuming the electric field is the same as in vacuum, are slightly smaller than the well-known gas-phase dipole moments of HCN and HCCCN. A simple elliptical cavity model quantitatively accounts for this difference, which arises from the dipole-induced polarization of the helium

  18. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
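
    For context, the classical NHPP-based SRGM fitting that such models build on can be sketched as maximum likelihood estimation of the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)); the equilibrium-distribution construction proposed in the paper replaces the fault-detection time distribution and is not shown here. The fault times below are made up.

```python
import numpy as np
from scipy.optimize import minimize

fault_times = np.array([2., 5., 9., 14., 20., 27., 35., 45., 58., 75.])  # hypothetical detection times
T = 80.0   # end of the observation window

def neg_log_lik(params):
    """NHPP log-likelihood for event times in [0, T]:
    sum of log intensities minus the mean value m(T), with
    m(t) = a (1 - exp(-b t)) and intensity a b exp(-b t) (Goel-Okumoto)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    log_intensity = np.log(a * b) - b * fault_times
    return -(log_intensity.sum() - a * (1.0 - np.exp(-b * T)))

fit = minimize(neg_log_lik, x0=[len(fault_times) * 1.5, 0.01], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(a_hat, b_hat, "expected remaining faults:", a_hat * np.exp(-b_hat * T))
```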

  19. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been largely improved during the last years and have shown their ability to deal with uncertainties during the design stage or to optimize the functioning and maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation, and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in the reliability analyses; the obtained reliability index measures the confidence in the structure given the statistical information available. 2. Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but using a less sophisticated one; the objective is to decrease the computational effort required by the reference modeling. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical modeling that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural-draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr

  20. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
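
    A heavily simplified Monte Carlo stand-in for the competing-risks setting described above (linear degradation, a Poisson stream of shocks, a fatal-shock probability, and degradation jumps from nonfatal shocks, but without the rate-acceleration and threshold-reduction mechanisms of the generalized mixed shock model) is sketched below with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def survives(t_mission, rate=0.001, threshold=150.0, shock_rate=0.01,
             p_fatal=0.02, mean_jump=1.0):
    """One Monte Carlo history of two competing risks: continuous linear degradation
    plus a Poisson stream of shocks. Fatal shocks cause immediate hard failure;
    nonfatal shocks add an instantaneous degradation jump."""
    t, x = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / shock_rate)        # time to the next shock
        if t + dt >= t_mission:                        # no more shocks before mission end
            x += rate * (t_mission - t)
            return x < threshold
        t += dt
        x += rate * dt
        if rng.random() < p_fatal:
            return False                               # hard failure: fatal shock
        x += rng.exponential(mean_jump)                # nonfatal shock damages the unit
        if x >= threshold:
            return False                               # soft failure: threshold crossed

n = 20_000
print(sum(survives(5000.0) for _ in range(n)) / n)     # mission reliability estimate
```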

  1. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    Coronal Mass Ejections (CME)'s properties are important to not only the physical scene itself but space-weather prediction. Several models (such as cone model, GCS model, and so on) have been raised to get rid of the projection effects within the properties observed by spacecraft. According to SOHO/ LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo Coronal Mass Ejections) within the 24th solar cycle till July 2012, by the ice-cream cone model. Considering that the method to obtain 3D parameters from the CME observations by multi-satellite and multi-angle has higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with which by ice-cream cone model. Then we could discuss the reliability of the ice-cream cone model.

  2. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

    Full Text Available In this paper, the portable axle-load meter of a dynamic weighing system is modeled using ADAMS. The weighing process is simulated while controlling a single variable, yielding simulated weighing data at different speeds and weights; simultaneously, a portable weighing system with the same parameters is used to obtain actual measurements. A comparative analysis of the simulation and measurement results under the same conditions shows that, at 30 km/h or less, the simulated and measured values do not differ by more than 5%. This not only verifies the reliability of the dynamic weighing model, but also makes it possible to improve the efficiency of algorithm studies by using dynamic weighing model simulation.

  3. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  4. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model is constructed using an age reduction factor. The model takes minimization of the repair cost rate as its objective and uses the smallest allowed reliability as the replacement condition. Equipment life is assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure data. An example verifies the rationality and benefits of the model.
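
    A small sketch of the policy evaluation implied by the abstract: a Weibull lifetime, an age reduction factor applied at each imperfect preventive maintenance (PM), a minimum allowed conditional reliability per interval, and a grid search for the lowest cost rate. The cost figures, Weibull parameters, and age reduction factor are hypothetical, and the paper's exact formulation may differ.

```python
import math

def weibull_R(t, beta, eta):
    """Weibull survival function with shape beta and scale eta."""
    return math.exp(-(t / eta) ** beta)

def policy_cost_rate(n_pm, interval, beta, eta, alpha, c_pm, c_rep, r_min):
    """Imperfect-PM policy: n_pm preventive maintenances at a fixed interval,
    then replacement. Each PM multiplies the effective age by alpha (< 1).
    Returns the cost rate, or None if the conditional reliability over any
    interval drops below the allowed minimum r_min."""
    age = 0.0
    for _ in range(n_pm + 1):
        r_cycle = weibull_R(age + interval, beta, eta) / weibull_R(age, beta, eta)
        if r_cycle < r_min:
            return None
        age = alpha * (age + interval)       # imperfect PM: age reduction
    return (n_pm * c_pm + c_rep) / ((n_pm + 1) * interval)

# Crude grid search over the PM interval and the number of PMs (all numbers hypothetical)
candidates = []
for n in range(0, 8):
    for T in range(200, 2001, 100):
        rate = policy_cost_rate(n, T, beta=2.2, eta=4000.0, alpha=0.6,
                                c_pm=1.0, c_rep=8.0, r_min=0.9)
        if rate is not None:
            candidates.append((rate, n, T))
print(min(candidates))   # (cost rate, number of PMs, PM interval)
```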

  5. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering the failure of the structure once the degradation process reaches a critical threshold. A closed-form solution of the reliability function is obtained thanks to Markov renewal theory. We then build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  6. Comparison between implicit and hybrid solvation methods for the ...

    Indian Academy of Sciences (India)

    Administrator

    Both implicit solvation method (dielectric polarizable continuum model, DPCM) and hybrid ... the free energy change (ΔGsol) as per the PCM ... Here the gas phase change is written as ΔGg = ΔEelec + ..... bution to the field of electrochemistry.

  7. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are treated as different objectives. In the second model the cost and the time spent on repairing the components are treated as two different objectives. These two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership function of each objective, transform the membership functions into equivalent linear membership functions by a first-order Taylor series and, finally, by forming a fuzzy goal programming model, obtain the desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  8. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distributions using 16 failure time data sets collected in real software projects
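
    As an illustration of what such a model looks like (this is not the authors' EM algorithm), the mean value function of an SRGM with normally distributed failure times can be written as m(t) = ω·Φ((t − μ)/σ) and fitted to cumulative failure counts; the data below are made up:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Illustrative cumulative failure counts observed at test times t (made-up data)
    t = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    n_fail = np.array([4, 11, 22, 35, 46, 53, 57, 59], dtype=float)

    def mean_value(t, omega, mu, sigma):
        """SRGM mean value function with normally distributed failure times."""
        return omega * norm.cdf((t - mu) / sigma)

    (omega, mu, sigma), _ = curve_fit(mean_value, t, n_fail, p0=[60, 40, 15])
    print(f"expected total faults ~ {omega:.1f}, mu = {mu:.1f}, sigma = {sigma:.1f}")
    ```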

  9. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating security of supply of 2010 gas network design that was conceived to connect Brazilian Northeast and Southeast regions. To provide best in class analysis, state of the art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  10. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction of the subject and a particular emphasis has been given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  11. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: Materials/Device Degradation; Degradation Kinetics; Time-To-Failure Modeling; Statistical Tools; Failure-Rate Modeling; Accelerated Testing; Ramp-To-Failure Testing; Important Failure Mechanisms for Integrated Circuits; Important Failure Mechanisms for Mechanical Components; Conversion of Dynamic Stresses into Static Equivalents; Small Design Changes Producing Major Reliability Improvements; Screening Methods; Heat Generation and Dissipation; and Sampling Plans and Confidence Intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  12. Power Electronic Packaging Design, Assembly Process, Reliability and Modeling

    CERN Document Server

    Liu, Yong

    2012-01-01

    Power Electronic Packaging presents an in-depth overview of power electronic packaging design, assembly,reliability and modeling. Since there is a drastic difference between IC fabrication and power electronic packaging, the book systematically introduces typical power electronic packaging design, assembly, reliability and failure analysis and material selection so readers can clearly understand each task's unique characteristics. Power electronic packaging is one of the fastest growing segments in the power electronic industry, due to the rapid growth of power integrated circuit (IC) fabrication, especially for applications like portable, consumer, home, computing and automotive electronics. This book also covers how advances in both semiconductor content and power advanced package design have helped cause advances in power device capability in recent years. The author extrapolates the most recent trends in the book's areas of focus to highlight where further improvement in materials and techniques can d...

  13. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation

  14. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    Science.gov (United States)

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

    In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations by applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, 100 times longer than conventional QM-based sampling. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
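
    For context, one of the dynamic properties mentioned, the diffusion coefficient, is commonly estimated from the slope of the mean-squared displacement via the Einstein relation D = slope/6. The generic sketch below uses a random walk as a stand-in trajectory and is unrelated to the SCMP simulations themselves:

    ```python
    import numpy as np

    def diffusion_coefficient(positions, dt):
        """Estimate D from the Einstein relation using the MSD slope.

        positions: array of shape (n_frames, 3), unwrapped coordinates of one particle.
        dt: time between frames.
        """
        n = len(positions)
        lags = np.arange(1, n // 2)
        msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=1))
                        for lag in lags])
        slope = np.polyfit(lags * dt, msd, 1)[0]   # linear fit of MSD vs time
        return slope / 6.0                          # 3D Einstein relation

    # toy usage with a random walk standing in for a real trajectory
    traj = np.cumsum(np.random.normal(size=(5000, 3)), axis=0)
    print(diffusion_coefficient(traj, dt=0.001))
    ```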

  15. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
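
    A toy version of the proposed growth model can be simulated directly: the corrosion rate is a Poisson square wave (redrawn at exponentially distributed epochs), growth starts after a random initiation time, and reliability-type quantities follow from many simulated histories. Every number below is invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def corrosion_depth(horizon, lam=0.5, mean_rate=0.1, t_init_mean=2.0):
        """Corrosion depth (mm) at `horizon` years for one random history."""
        t = rng.exponential(t_init_mean)          # random time to corrosion initiation
        depth, rate = 0.0, rng.exponential(mean_rate)
        while t < horizon:
            dt = rng.exponential(1.0 / lam)       # duration until the rate is redrawn
            dt = min(dt, horizon - t)
            depth += rate * dt                    # piecewise-constant (square wave) growth
            rate = rng.exponential(mean_rate)
            t += dt
        return depth

    depths = np.array([corrosion_depth(30.0) for _ in range(10_000)])
    print("mean depth:", depths.mean(), " P(depth > 2 mm):", (depths > 2.0).mean())
    ```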

  16. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
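
    The distribution comparison described above can be reproduced in spirit with a two-sample Kolmogorov–Smirnov test at the 0.05 significance level; the two arrays below are synthetic stand-ins for gridded-station and downscaled values at a single grid cell:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    gsd_ppt = rng.gamma(shape=2.0, scale=30.0, size=672)   # stand-in for observed monthly PPT
    sd_ppt = rng.gamma(shape=2.2, scale=28.0, size=672)    # stand-in for downscaled GCM PPT

    stat, p = ks_2samp(gsd_ppt, sd_ppt)
    reliable = p >= 0.05   # fail to reject: the two distributions are statistically indistinguishable
    print(f"KS statistic = {stat:.3f}, p = {p:.3f}, reliable = {reliable}")
    ```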

  17. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

    Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution

  18. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability become better understood in the modeling process, this information begins to have important implications on the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analysis such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the

  19. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
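
    The splitting strategy can be illustrated with a toy accumulator: when everything is kept in half precision, the small increments of a slowly varying quantity round to zero, whereas keeping only the accumulating state in higher precision recovers the result. This is a generic sketch, not the land surface model code:

    ```python
    import numpy as np

    increment = np.float16(1e-4)      # slow process: tiny tendency added each time step
    n_steps = 100_000

    # all low precision: increments round to zero once the state grows
    state_lp = np.float16(1.0)
    for _ in range(n_steps):
        state_lp = np.float16(state_lp + increment)

    # split: only the accumulating state is kept in float64
    state_split = np.float64(1.0)
    for _ in range(n_steps):
        state_split += np.float64(increment)

    print("pure float16:", state_lp, " split precision:", state_split)
    ```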

  20. Reliable critical sized defect rodent model for cleft palate research.

    Science.gov (United States)

    Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena

    2014-12-01

    Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. Published MPC model (9 × 5 × 3 mm³) lacked sufficient information for tested rats. Our initial studies utilizing AC model (7 × 4 × 3 mm³) in 8 and 16 weeks old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16 weeks old SD and Wistar rats, virtual planning was performed to modify MPC and AC defects dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16 weeks old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at defect margin leaving central bone void confirming the critical size nature of the modified MPC and AC defects. Presented modifications for MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Usage models in reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Haapanen, P.; Pulkkinen, U. [VTT Automation, Espoo (Finland); Korhonen, J. [VTT Electronics, Espoo (Finland)

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.).

  2. Reliability model for offshore wind farms; Paalidelighedsmodel for havvindmoelleparker

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, P.; Lundtang Paulsen, J.; Lybech Toegersen, M.; Krogh, T. [Risoe National Lab., Roskilde (Denmark); Raben, N.; Donovan, M.H.; Joergensen, L. [SEAS (Denmark); Winther-Jensen, M.

    2002-05-01

    A method for predicting the mean availability of an offshore wind farm has been developed. The factors comprised are the reliability of the individual turbine, the preventive maintenance strategy, the climate, the number of repair teams, and the type of boats available for transport. The mean availability is defined as the sum of the fractions of time during which each turbine is available for production. The project was carried out together with SEAS Wind Technique, and their Roedsand site was chosen as the working example. A climate model was created based on actual site measurements. The prediction of availability is done with a Monte Carlo simulation. Software was developed for preparing the climate model from weather measurements as well as for the Monte Carlo simulation. Three examples were simulated, one with guessed parameters and the other two with parameters closer to the Roedsand case. (au)
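
    A heavily simplified Monte Carlo sketch of this kind of availability calculation is shown below (random turbine failures, weather windows that allow or block repairs, a limited number of repair teams). Every parameter is a placeholder rather than a Roedsand value:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def farm_availability(n_turbines=72, days=365 * 5, fail_rate=1 / 180,
                          repair_days=3, n_teams=2, p_calm=0.6):
        """Fraction of turbine-days on which the turbines are available (toy model)."""
        remaining = np.zeros(n_turbines)          # remaining repair work per turbine (days)
        up_days = 0
        for _ in range(days):
            up = remaining == 0
            up_days += up.sum()
            # new random failures among turbines that are currently up
            failed = up & (rng.random(n_turbines) < fail_rate)
            remaining[failed] = repair_days
            # repairs progress only in calm weather, limited by the number of teams
            if rng.random() < p_calm:
                busy = np.flatnonzero(remaining > 0)[:n_teams]
                remaining[busy] = np.maximum(remaining[busy] - 1, 0)
        return up_days / (n_turbines * days)

    print("mean availability ~", round(farm_availability(), 3))
    ```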

  3. Usage models in reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Pulkkinen, U.; Korhonen, J.

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.)

  4. Updated Abraham solvation parameters for polychlorinated biphenyls

    NARCIS (Netherlands)

    van Noort, P.C.M.; Haftka, J.J.H.; Parsons, J.R.

    2010-01-01

    This study shows that the recently published polychlorinated biphenyl (PCB) Abraham solvation parameters predict PCB air−n-hexadecane and n-octanol−water partition coefficients very poorly, especially for highly ortho-chlorinated congeners. Therefore, an updated set of PCB solvation parameters was

  5. Updated Abraham solvation parameters for polychlorinated biphenyls

    NARCIS (Netherlands)

    Noort, van P.C.M.; Haftka, J.J.H.; Parsons, J.R.

    2010-01-01

    This study shows that the recently published polychlorinated biphenyl (PCB) Abraham solvation parameters predict PCB air-n-hexadecane and n-octanol-water partition coefficients very poorly, especially for highly ortho-chlorinated congeners. Therefore, an updated set of PCB solvation parameters was

  6. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  7. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703. RELIABILITY .... V , , given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory,. Technical ...

  8. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.

  9. A Dialogue about MCQs, Reliability, and Item Response Modelling

    Science.gov (United States)

    Wright, Daniel B.; Skagerberg, Elin M.

    2006-01-01

    Multiple choice questions (MCQs) are becoming more common in UK psychology departments and the need to assess their reliability is apparent. Having examined the reliability of MCQs in our department we faced many questions from colleagues about why we were examining reliability, what it was that we were doing, and what should be reported when…

  10. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
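
    The uncertainty-propagation step can be sketched generically: sample the uncertain process parameters, push them through a process model (here replaced by a fictitious linear surrogate), and report output ranges and correlation coefficients. Nothing below reproduces the calibrated finite-volume model of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000

    # uncertain process inputs (illustrative distributions)
    power = rng.normal(200.0, 10.0, n)        # laser power [W]
    speed = rng.normal(800.0, 40.0, n)        # scan speed [mm/s]

    # fictitious surrogate for melt-pool depth, standing in for the real thermal model
    depth = 0.02 * power - 0.004 * speed + rng.normal(0.0, 0.1, n)

    print("depth range (2.5%-97.5%):", np.percentile(depth, [2.5, 97.5]))
    print("corr(depth, power):", np.corrcoef(depth, power)[0, 1].round(2))
    print("corr(depth, speed):", np.corrcoef(depth, speed)[0, 1].round(2))
    ```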

  11. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components or redundant schemes (to obtain more reliability, strength, more power, etc.). A reliability model of the SNS (Spallation Neutron Source) LINAC has been developed within the MAX project, and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with the SNS operational data. Results and conclusions are presented in this paper, oriented toward identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of the high-power accelerators, in view of the future reliability targets of ADS accelerators.

  12. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
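
    A compact sketch in the same spirit (not the authors' code): build a Barabasi-Albert graph, trip one random node, let the failure cascade to neighbours with some probability, and use the unserved fraction as a crude reliability index. Parameters are arbitrary:

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(4)

    def cascade_fraction(n=1000, m=2, p_spread=0.05):
        """Fraction of nodes lost after a single random failure cascades."""
        g = nx.barabasi_albert_graph(n, m, seed=int(rng.integers(1 << 30)))
        failed = {int(rng.integers(n))}
        frontier = list(failed)
        while frontier:
            nxt = []
            for node in frontier:
                for nb in g.neighbors(node):
                    if nb not in failed and rng.random() < p_spread:
                        failed.add(nb)
                        nxt.append(nb)
            frontier = nxt
        return len(failed) / n

    losses = [cascade_fraction() for _ in range(200)]
    print("mean unserved fraction:", np.mean(losses))
    ```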

  13. Electrical resistivities and solvation enthalpies for solutions of salts in liquid alkali metals

    International Nuclear Information System (INIS)

    Hubberstey, P.; Dadd, A.T.

    1982-01-01

    An empirical correlation is shown to exist between the resistivity coefficients dρ/dc for solutes in liquid alkali metals and the corresponding solvation enthalpies U_solvn of the neutral gaseous solute species. Qualitative arguments based on an electrostatic solvation model, in which the negative solute atom is surrounded by a solvation sphere of positive solvent ion cores, are used to show that both parameters are dependent on the charge density of the solute atom and hence on the extent of charge transfer from solvent to solute. Thus as the charge density of the solute increases, the solvation enthalpy increases regularly and the resistivity coefficients pass through a maximum to give the observed approximately parabolic dρ/dc versus U_solvn relationship. (Auth.)

  14. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Al single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm, verify the reliability model and computer algorithm, and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macroanalysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.

  15. Cluster expansion of the solvation free energy difference: Systematic improvements in the solvation of single ions

    Science.gov (United States)

    Pliego, Josefredo R.

    2017-07-01

    The cluster expansion method has been used in the imperfect gas theory for several decades. This paper proposes a cluster expansion of the solvation free energy difference. This difference, which results from a change in the solute-solvent potential energy, can be written as the logarithm of a finite series. Similar to the Mayer function, the terms in the series are related to configurational integrals, which makes the integrand relevant only for configurations of the solvent molecules close to the solute. In addition, the terms involve interaction of the solute with one, two, and so on solvent molecules. The approach could be used for hybrid quantum mechanical and molecular mechanics methods or the mixed cluster-continuum approximation. A simple form of the theory was applied for prediction of pKa in methanol; the results indicated that three explicit methanol molecules and the dielectric continuum lead to a root-mean-squared error (RMSE) of only 1.3 pKa units, whereas the pure continuum solvation model based on density method leads to an RMSE of 6.6 pKa units.

  16. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical.

    Science.gov (United States)

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2016-06-21

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange-correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH(•) radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and OH(•) radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.

  17. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical

    International Nuclear Information System (INIS)

    Genova, Alessandro; Pavanello, Michele; Ceresoli, Davide

    2016-01-01

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange–correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH • radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and OH • radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.

  18. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  19. Wireless Channel Modeling Perspectives for Ultra-Reliable Communications

    DEFF Research Database (Denmark)

    Eggers, Patrick Claus F.; Popovski, Petar

    2018-01-01

    Ultra-Reliable Communication (URC) is one of the distinctive features of the upcoming 5G wireless communication. The level of reliability, going down to packet error rates (PER) of 10^-9, should be sufficiently convincing in order to remove cables in an industrial setting or provide remote co...

  20. Improvements to the APBS biomolecular solvation software suite.

    Science.gov (United States)

    Jurrus, Elizabeth; Engel, Dave; Star, Keith; Monson, Kyle; Brandi, Juan; Felberg, Lisa E; Brookes, David H; Wilson, Leighton; Chen, Jiahui; Liles, Karina; Chun, Minju; Li, Peter; Gohara, David W; Dolinsky, Todd; Konecny, Robert; Koes, David R; Nielsen, Jens Erik; Head-Gordon, Teresa; Geng, Weihua; Krasny, Robert; Wei, Guo-Wei; Holst, Michael J; McCammon, J Andrew; Baker, Nathan A

    2018-01-01

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses the three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this article, we discuss the models and capabilities that have recently been implemented within the APBS software package, including a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics. © 2017 The Protein Society.

  1. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), is investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some of them. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
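
    The rank histogram itself takes only a few lines: for each observation, count how many ensemble members fall below it and histogram the resulting ranks; a flat histogram indicates that the observations could have been sampled from the ensemble. The arrays below are synthetic stand-ins for observations and ensemble forecasts:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_obs, n_members = 500, 20

    obs = rng.normal(0.0, 1.0, n_obs)                       # stand-in observations
    ensemble = rng.normal(0.0, 1.0, (n_obs, n_members))     # stand-in ensemble forecasts

    # rank of each observation among its ensemble members (0 .. n_members)
    ranks = (ensemble < obs[:, None]).sum(axis=1)
    hist = np.bincount(ranks, minlength=n_members + 1)
    print(hist)   # roughly flat -> observations look like draws from the ensemble
    ```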

  2. 1 SUPPLEMENTARY INFORMATION Nonpolar Solvation Dynamics ...

    Indian Academy of Sciences (India)

    IITP

    Figure S2. (a) Nonequilibrium solvation response functions calculated after averaging over different numbers of nonequilibrium trajectories. The response function converges after averaging over more than ...

  3. Models and data requirements for human reliability analysis

    International Nuclear Information System (INIS)

    1989-03-01

    It has been widely recognised for many years that the safety of the nuclear power generation depends heavily on the human factors related to plant operation. This has been confirmed by the accidents at Three Mile Island and Chernobyl. Both these cases revealed how human actions can defeat engineered safeguards and the need for special operator training to cover the possibility of unexpected plant conditions. The importance of the human factor also stands out in the analysis of abnormal events and insights from probabilistic safety assessments (PSA's), which reveal a large proportion of cases having their origin in faulty operator performance. A consultants' meeting, organized jointly by the International Atomic Energy Agency (IAEA) and the International Institute for Applied Systems Analysis (IIASA) was held at IIASA in Laxenburg, Austria, December 7-11, 1987, with the aim of reviewing existing models used in Probabilistic Safety Assessment (PSA) for Human Reliability Analysis (HRA) and of identifying the data required. The report collects both the contributions offered by the members of the Expert Task Force and the findings of the extensive discussions that took place during the meeting. Refs, figs and tabs

  4. Solvated electron structure in glassy matrices

    International Nuclear Information System (INIS)

    Kevan, L.

    1981-01-01

    Current knowledge of the detailed geometrical structure of solvated electrons in aqueous and organic media is summarized. The geometry of solvated electrons in glassy methanol, ethanol, and 2-methyltetrahydrofuran is discussed. Advanced electron magnetic resonance methods and development of new methods of analysis of electron spin echo modulation patterns, second moment line shapes, and forbidden photon spin-flip transitions for paramagnetic species in these disordered systems are discussed. 66 references are cited

  5. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  6. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates the reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markovian chain. We give the quantized relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. When these conditions are satisfied, the quantity of data transported through a wireless network node will not exceed the node capacity, so that reliability is ensured. Our theoretical work helps provide a deeper understanding of real-world wireless sensor networks and may find application in the fields of network design and topology control.

  7. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  8. Modeling, implementation, and validation of arterial travel time reliability.

    Science.gov (United States)

    2013-11-01

    Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating travel time reliability for arterials. This method was not initially implemented or validated using field data. This project evaluated and r...

  9. Study of redundant Models in reliability prediction of HXMT's HES

    International Nuclear Information System (INIS)

    Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng

    2010-01-01

    Two redundant equipment structures for HXMT's HES are proposed: block backup and dual-system cold redundancy. Reliability prediction is then made using the parts count method. The two proposals are compared and analyzed. It is concluded that the block backup redundant structure offers higher reliability and a longer service life. (authors)
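
    A minimal illustration of the two ingredients mentioned, parts-count failure-rate summation and standard textbook redundancy formulas (active parallel versus ideal cold standby with perfect switching), with invented failure rates and no relation to the actual HES architecture:

    ```python
    import numpy as np

    # parts count: the block failure rate is the sum of component failure rates (per hour)
    lambdas = np.array([2e-6, 5e-7, 1.2e-6, 8e-7])
    lam = lambdas.sum()
    t = 3 * 365 * 24.0   # three-year mission, hours

    r_single = np.exp(-lam * t)
    r_parallel = 1 - (1 - r_single) ** 2            # two identical units active in parallel
    r_cold = np.exp(-lam * t) * (1 + lam * t)       # ideal cold standby, perfect switching

    print(f"single {r_single:.4f}  parallel {r_parallel:.4f}  cold standby {r_cold:.4f}")
    ```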

  10. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated by an appropriate probability distribution (in this paper a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)

  11. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  12. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  13. Enhanced free energy of extraction of Eu3+ and Am3+ ions towards diglycolamide appended calix[4]arene: insights from DFT-D3 and COSMO-RS solvation models.

    Science.gov (United States)

    Ali, Sk Musharaf

    2017-08-22

    Density functional theory in conjunction with COSMO and COSMO-RS solvation models employing dispersion correction (DFT-D3) has been applied to gain an insight into the complexation of Eu3+/Am3+ with diglycolamide (DGA) and calix[4]arene appended diglycolamide (CAL4DGA) in ionic liquids by studying structures, energetics, thermodynamics and population analysis. The calculated Gibbs free energy for both Eu3+ and Am3+ ions with DGA was found to be smaller than that with CAL4DGA. The entropy of complexation was also found to be reduced to a large extent with DGA compared to complexation with CAL4DGA. The solution phase free energy was found to be negative and was higher for the Eu3+ ion. The entropy of complexation was not only found to be further reduced but also became negative in the case of DGA alone. Though the entropy was found to be negative it could not outweigh the high negative enthalpic contribution. The same trend was observed in solution, where the free energy of extraction, ΔG, for Eu3+ ions was shown to be higher than that for Am3+ ions towards free DGA. But the values of ΔG and ΔΔG (= ΔG(Eu) - ΔG(Am)) were found to be much higher with CAL4DGA (-12.58 kcal mol-1) in the presence of nitrate ions compared to DGA (-1.69 kcal mol-1) due to enhanced electronic interaction and a positive entropic contribution. Furthermore, both the COSMO and COSMO-RS models predict very close values of ΔΔΔG (= ΔΔG(CAL4DGA) - ΔΔG(nDGA)), indicating that both solvation models could be applied for evaluating the metal ion selectivity. The value of the reaction free energy was found to be higher after dispersion correction. The charge on the Eu and Am atoms for the complexes with DGA and CAL4DGA indicates a charge-dipole type interaction leading to strong binding energy. The present theoretical results support the experimental findings and thus might be of importance in the design of functionalized ligands.

  14. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. The failures of an air conditioner, such as failure to turn on, loss of cooling capacity, reduced output temperatures, loss of cool air supply and loss of air flow entirely, can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasting system failure rates is very important for maintenance. This paper focuses on the reliability of air conditioning systems. Statistical distributions commonly applied in reliability settings were used: the standard (two-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimations and predictions were used for evaluations. To evaluate operating conditions in a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was assessed. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 degree F (4-7 degree C). The chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was carried out with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between the two important families of distribution functions, namely the Weibull and Gamma families, was made. It was found that the Weibull method performed better for decision making.
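
    A generic version of the fit-and-evaluate step is sketched below with synthetic failure times and scipy estimators; it does not reproduce the 86.012% and 77.7% figures of the study:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    ttf = rng.weibull(1.8, 200) * 4000.0          # synthetic times-to-failure (hours)

    # fit both candidate families with the location parameter fixed at zero
    wb_shape, _, wb_scale = stats.weibull_min.fit(ttf, floc=0)
    ga_shape, _, ga_scale = stats.gamma.fit(ttf, floc=0)

    t = 2000.0                                     # evaluation time
    print("Weibull R(t) =", stats.weibull_min.sf(t, wb_shape, scale=wb_scale).round(3))
    print("Gamma   R(t) =", stats.gamma.sf(t, ga_shape, scale=ga_scale).round(3))
    ```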

  15. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

    It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto's Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using the software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of a piece of software's reliability to software failure data, as well as a lack of sufficient software failure data, is also identified as a possible limitation when applying the software reliability growth models to safety-critical software
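
    For reference, the Goel-Okumoto NHPP mentioned above has mean value function m(t) = a(1 − exp(−bt)); a small least-squares fit to made-up failure counts illustrates how few data points such models typically have available for safety-critical software:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # made-up cumulative failure counts during testing (weeks, counts)
    t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    cum_failures = np.array([3, 6, 8, 10, 11, 12, 12, 13], dtype=float)

    def goel_okumoto(t, a, b):
        """Mean value function of the Goel-Okumoto NHPP model."""
        return a * (1.0 - np.exp(-b * t))

    (a, b), _ = curve_fit(goel_okumoto, t, cum_failures, p0=[15.0, 0.3])
    print(f"expected total faults a = {a:.1f}, detection rate b = {b:.2f}")
    ```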

  16. Comparison of solvation dynamics of electrons in four polyols

    Energy Technology Data Exchange (ETDEWEB)

    Lampre, I.; Pernot, P.; Bonin, J. [Laboratoire de Chimie Physique/ELYSE, Universite Paris-Sud 11, UMR 8000, Bat. 349, Orsay F-91405 (France); CNRS, Orsay F-91405 (France); Mostafavi, M. [Laboratoire de Chimie Physique/ELYSE, Universite Paris-Sud 11, UMR 8000, Bat. 349, Orsay F-91405 (France); CNRS, Orsay F-91405 (France)], E-mail: mehran.mostafavi@lcp.u-psud.fr

    2008-10-15

    Using pump-probe transient absorption spectroscopy, we studied the solvation dynamics of the electron in liquid polyalcohols: ethane-1,2-diol, propane-1,2-diol, propane-1,3-diol and propane-1,2,3-triol. Time-resolved absorption spectra ranging from 440 to 720 nm were measured. Our study shows that the excess electron in the diols presents an intense and wide absorption band in the visible and near-IR spectral domain at early times after two-photon ionization of the neat solvent. Then, for the first tens of picoseconds, the electron spectrum shifts toward the blue domain and its bandwidth decreases as the red part of the initial spectrum rapidly drops, while the blue part hardly evolves. In contrast, in the triol, the absorption spectrum of the electron is situated in the visible range early after the pump pulse and then evolves only in the red part. Bayesian data analysis of the observed picosecond solvation dynamics with different models favors a heterogeneous continuous relaxation. This is corroborated by the analogy between the changes in the absorption band with increasing time and with decreasing temperature, which tends to indicate a similar organizational disorder of the solvent. Moreover, the electron solvation dynamics is very fast in propane-1,2,3-triol despite its high viscosity, which highlights the role of the OH group in that process.

  17. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    Science.gov (United States)

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD

  18. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences about the parameters of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the value of the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
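
    To make the independent-causes setting concrete, the sketch below (with invented shape/scale parameters, not the paper's simulation design) treats the observed failure time as the minimum of latent Weibull cause-specific lifetimes and checks the product-form system reliability by Monte Carlo:

        # Sketch: independent competing risks with Weibull cause-specific lifetimes.
        # The observed failure time is the minimum over causes; system reliability
        # is the product of the per-cause survival functions. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        causes = {"wear-out": (2.5, 1500.0),   # (shape, scale), assumed values
                  "overload": (1.0, 4000.0)}

        def system_reliability(t):
            r = 1.0
            for shape, scale in causes.values():
                r *= np.exp(-(t / scale) ** shape)
            return r

        # Monte Carlo check: simulate latent times per cause, observe the minimum.
        n = 100_000
        latent = np.column_stack([scale * rng.weibull(shape, n)
                                  for shape, scale in causes.values()])
        observed = latent.min(axis=1)

        t = 1000.0
        print(f"Analytic  R({t:.0f}) = {system_reliability(t):.3f}")
        print(f"Simulated R({t:.0f}) = {(observed > t).mean():.3f}")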

  19. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital system instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability

  20. Solvation quantities from a COSMO-RS equation of state

    International Nuclear Information System (INIS)

    Panayiotou, C.; Tsivintzelis, I.; Aslanidou, D.; Hatzimanikatis, V.

    2015-01-01

    Highlights: • Extension of the successful COSMO-RS model to an equation-of-state model. • Two scaling constants, obtained from atom-specific contributions. • Overall estimation of the solvation quantities and contributions. - Abstract: This work focuses on the extension of the successful COSMO-RS model of mixtures into an equation-of-state model of fluids and its application for the estimation of solvation/hydration quantities of a variety of chemical substances. These quantities include free energies, enthalpies and entropies of hydration as well as the separate contributions to each of them. Emphasis is given to the estimation of contributions from the conformational changes of solutes upon solvation and the associated restructuring of solvent in its immediate neighborhood. COSMO-RS is a quantum-mechanics based group/segment contribution model in which the Quasi-Chemical (QC) approach is used for the description of the non-random distribution of interacting segments in the system. Thus, the equation-of-state development is done through such a QC framework. The new model will not need any adjustable parameters for the strong specific interactions, such as hydrogen bonds, since they will be provided by the quantum-mechanics based cosmo-files – a key feature of the COSMO-RS model. It will need, however, one volumetric and one energy parameter per fluid, which are scaling constants or molecular descriptors of the fluid and are obtained from rather easily available data such as densities, boiling points, vapor pressures, heats of vaporization or second virial coefficients. The performance and the potential of the new equation-of-state model to become a fully predictive model are critically discussed

  1. Comparative assessment of computational methods for the determination of solvation free energies in alcohol-based molecules.

    Science.gov (United States)

    Martins, Silvia A; Sousa, Sergio F

    2013-06-05

    The determination of differences in solvation free energies between related drug molecules remains an important challenge in computational drug optimization, when fast and accurate calculation of differences in binding free energy is required. In this study, we have evaluated the performance of five commonly used polarized continuum model (PCM) methodologies in the determination of solvation free energies for 53 typical alcohol and alkane small molecules. In addition, the performance of these PCM methods, of a thermodynamic integration (TI) protocol and of the Poisson-Boltzmann (PB) and generalized Born (GB) methods, was tested in the determination of solvation free energy changes for 28 common alkane-alcohol transformations, in which a hydrogen atom is substituted by a hydroxyl substituent. The results show that the solvation model D (SMD) performs best among the PCM-based approaches in estimating solvation free energies for alcohol molecules, and solvation free energy changes for alkane-alcohol transformations, with an average error below 1 kcal/mol for both quantities. However, for the determination of solvation free energy changes on alkane-alcohol transformation, PB and TI yielded better results. TI was particularly accurate in the treatment of hydroxyl group additions to aromatic rings (0.53 kcal/mol), a common transformation when optimizing drug-binding in computer-aided drug design. Copyright © 2013 Wiley Periodicals, Inc.

  2. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safetly, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  3. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
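
    To illustrate the Markov-chain machinery on the paradigm case mentioned above (a generic sketch of a 2-out-of-3 system with identical components, constant failure and repair rates and a single repair crew; it is not the QUEFT/MARKOMAG-S/MCADJSEN code), the state probabilities obey dP/dt = P·Q and availability is the probability of having at least two working components:

        # Sketch: dynamic reliability of a 2-out-of-3 system via a Markov chain.
        # States index the number of working components (3, 2, 1, 0).
        # lam = per-component failure rate, mu = repair rate (one crew); values assumed.
        import numpy as np
        from scipy.integrate import solve_ivp

        lam, mu = 1.0e-3, 5.0e-2   # per hour (illustrative)

        # Generator matrix Q: Q[i, j] is the transition rate from state i to state j.
        Q = np.array([[-3*lam, 3*lam,       0.0,       0.0],
                      [    mu, -(mu+2*lam), 2*lam,     0.0],
                      [   0.0,  mu,        -(mu+lam),  lam],
                      [   0.0,  0.0,        mu,       -mu]])

        def rhs(t, p):
            return p @ Q

        p0 = np.array([1.0, 0.0, 0.0, 0.0])      # start with all three components up
        sol = solve_ivp(rhs, (0.0, 5000.0), p0, t_eval=[5000.0], rtol=1e-8)

        p_final = sol.y[:, -1]
        availability = p_final[0] + p_final[1]    # at least 2 of 3 components working
        print(f"Availability at t = 5000 h: {availability:.5f}")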

  4. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
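
    The reliability-physics idea can be sketched as a Monte Carlo estimate of HEP = P(performance time > phenomenological time); the distributions and parameters below are purely illustrative, not the fits obtained in the MNR study:

        # Sketch of a time-oriented (reliability physics) HEP estimate:
        # HEP = P(performance time > phenomenological time), via Monte Carlo.
        # Distribution choices and parameters are purely illustrative.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000

        # Time available before damage (phenomenological), e.g. lognormal derived
        # from thermal-hydraulic response-surface results (assumed parameters).
        t_phenom = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)   # minutes

        # Operator performance time, e.g. from interview-based estimates (assumed).
        t_perform = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=n)  # minutes

        hep = np.mean(t_perform > t_phenom)
        print(f"Estimated HEP for the manual action: {hep:.4f}")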

  5. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient in combining evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address the issue, a new method is proposed in this paper. Not only the static sensor reliability but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
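
    As a simplified illustration of the combination step (discounting each sensor report by a reliability weight and fusing with Dempster's rule; the masses and weights are invented, and the paper's evidence-distance/belief-entropy weighting is not reproduced here):

        # Sketch: discount two sensor mass functions by their reliabilities, then
        # combine them with Dempster's rule. Frame = {F1, F2}; numbers are illustrative.
        from itertools import product

        FRAME = frozenset({"F1", "F2"})

        def discount(m, alpha):
            """Shafer discounting: scale masses by reliability alpha, move the rest to the frame."""
            out = {A: alpha * v for A, v in m.items()}
            out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
            return out

        def combine(m1, m2):
            """Dempster's rule of combination with normalization of the conflict mass."""
            raw, conflict = {}, 0.0
            for (A, v1), (B, v2) in product(m1.items(), m2.items()):
                inter = A & B
                if inter:
                    raw[inter] = raw.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
            return {A: v / (1.0 - conflict) for A, v in raw.items()}, conflict

        s1 = {frozenset({"F1"}): 0.8, frozenset({"F2"}): 0.1, FRAME: 0.1}
        s2 = {frozenset({"F2"}): 0.7, frozenset({"F1"}): 0.2, FRAME: 0.1}

        fused, k = combine(discount(s1, 0.9), discount(s2, 0.6))
        print("conflict mass k =", round(k, 3))
        for A, v in sorted(fused.items(), key=lambda kv: -kv[1]):
            print(set(A), round(v, 3))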

  6. Absolute single-ion solvation free energy scale in methanol determined by the lithium cluster-continuum approach.

    Science.gov (United States)

    Pliego, Josefredo R; Miguel, Elizabeth L M

    2013-05-02

    Absolute solvation free energy of the lithium cation in methanol was calculated by the cluster-continuum quasichemical theory of solvation. Clusters with up to five methanol molecules were investigated using X3LYP, MP2, and MP4 methods with DZVP, 6-311+G(2df,2p), TZVPP+diff, and QZVPP+diff basis sets and including the cluster solvation through the PCM and SMD continuum models. Our calculations have determined a value of -118.1 kcal mol(-1) for the solvation free energy of the lithium, in close agreement with a value of -116.6 kcal mol(-1) consistent with the TATB assumption. Using data of solvation and transfer free energy of a pair of ions, electrode potentials and pKa, we have obtained the solvation free energy of 25 ions in methanol. Our analysis leads to a value of -253.6 kcal mol(-1) for the solvation free energy of the proton, which can be compared with the value of -263.5 kcal mol(-1) obtained by Kelly et al. using the cluster pair approximation. Considering that this difference is due to the methanol surface potential, we have estimated that it corresponds to -0.429 V.

  7. SIERRA - A 3-D device simulator for reliability modeling

    Science.gov (United States)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
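
    The solver strategy described above (an incomplete-LU preconditioned CGS iteration) can be sketched with SciPy's sparse tools; the system below is a generic diagonally dominant random matrix standing in for the coupled Poisson/continuity equations, not the SIERRA code itself:

        # Sketch: incomplete-LU preconditioned CGS solve of a sparse linear system,
        # the same general strategy described for SIERRA (generic example system).
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(1)
        n = 2000
        # A diagonally dominant random sparse matrix stands in for the device Jacobian.
        A = sp.random(n, n, density=1e-3, random_state=rng, format="csc")
        A = (A + sp.diags(np.abs(A).sum(axis=1).A1 + 1.0)).tocsc()
        b = rng.standard_normal(n)

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator(A.shape, ilu.solve)     # preconditioner M ~ A^-1

        x, info = spla.cgs(A, b, M=M)
        print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))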

  8. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra are involved in the energy class, and power and response spectra and PGAs are involved in the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in the time and frequency domains and has less variance than the other indices. In addition, we have proposed a procedure for DRA based on total energy. (author)

  9. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  10. Breaking the polar-nonpolar division in solvation free energy prediction.

    Science.gov (United States)

    Wang, Bao; Wang, Chengzhang; Wu, Kedi; Wei, Guo-Wei

    2018-02-05

    Implicit solvent models divide solvation free energies into polar and nonpolar additive contributions, whereas polar and nonpolar interactions are inseparable and nonadditive. We present a feature functional theory (FFT) framework to break this ad hoc division. The essential ideas of FFT are as follows: (i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecule from another; (ii) feature-function relationship assumption: the macroscopic features of a molecule, including its solvation free energy, are functionals of the microscopic feature vector; and (iii) similarity assumption: molecules with similar microscopic features have similar macroscopic properties, such as solvation free energies. Based on these assumptions, solvation free energy prediction is carried out in the following protocol. First, we construct a molecular microscopic feature vector that is efficient in characterizing the solvation process using quantum mechanics and Poisson-Boltzmann theory. Microscopic feature vectors are combined with macroscopic features, that is, physical observables, to form extended feature vectors. Additionally, we partition a solvation dataset into queries according to molecular compositions. Moreover, for each target molecule, we adopt a machine learning algorithm for its nearest neighbor search, based on the selected microscopic feature vectors. Finally, from the extended feature vectors of the obtained nearest neighbors, we construct a functional of solvation free energy, which is employed to predict the solvation free energy of the target molecule. The proposed FFT model has been extensively validated via a large dataset of 668 molecules. The leave-one-out test gives an optimal root-mean-square error (RMSE) of 1.05 kcal/mol. FFT predictions of SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 challenge sets deliver the RMSEs of 0.61, 1.86, 1.64, 0.86, and 1.14 kcal/mol, respectively. Using a test set of 94
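
    The nearest-neighbor step of the protocol can be sketched as below; random vectors stand in for the quantum-mechanics/Poisson-Boltzmann-derived microscopic features, and a distance-weighted average is used as a much-simplified stand-in for the functional constructed in the paper:

        # Sketch of the nearest-neighbour step: predict a target molecule's solvation
        # free energy from molecules with similar microscopic feature vectors.
        # Random features stand in for the QM / Poisson-Boltzmann descriptors.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(7)
        n_train, n_feat = 600, 12
        X_train = rng.standard_normal((n_train, n_feat))     # microscopic feature vectors
        dG_train = X_train @ rng.standard_normal(n_feat) + 0.3 * rng.standard_normal(n_train)

        nn = NearestNeighbors(n_neighbors=8).fit(X_train)

        def predict_dG(x_target):
            """Distance-weighted average of the neighbours' solvation free energies."""
            dist, idx = nn.kneighbors(x_target.reshape(1, -1))
            w = 1.0 / (dist[0] + 1e-6)
            return float(np.average(dG_train[idx[0]], weights=w))

        x_new = rng.standard_normal(n_feat)
        print(f"Predicted solvation free energy: {predict_dG(x_new):.2f} kcal/mol (toy units)")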

  11. Electrostatic solvation free energies of charged hard spheres using molecular dynamics with density functional theory interactions

    Science.gov (United States)

    Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.; Mundy, Christopher J.

    2017-10-01

    Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for models of electrolyte solution. Here, we provide definitions of the various types of single ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to unphysical values for the single ion solvation free energy and that charging free energies for cations are approximately linear as a function of charge but that there is a small non-linearity for small anions. The charge hydration asymmetry for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation.
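
    For reference, the Born (linear response) model used as the comparison point above gives Delta_G = -(q^2 / (8 pi eps0 a)) (1 - 1/eps_r) for a charge q in a cavity of radius a; a quick numeric sketch (the ion charge and radius below are chosen arbitrarily):

        # Sketch: Born-model electrostatic solvation free energy of a charged sphere,
        # Delta_G = -(q^2 / (8 * pi * eps0 * a)) * (1 - 1/eps_r), reported in kcal/mol.
        import scipy.constants as const

        def born_dG(charge_e, radius_angstrom, eps_r=78.4):
            q = charge_e * const.e
            a = radius_angstrom * 1e-10
            dG_joule = -(q**2 / (8 * const.pi * const.epsilon_0 * a)) * (1 - 1/eps_r)
            return dG_joule * const.Avogadro / 4184.0      # J per ion -> kcal/mol

        # Example: a monovalent ion modelled as a 2.0 Angstrom hard sphere in water.
        print(f"Born Delta_G = {born_dG(1, 2.0):.1f} kcal/mol")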

  12. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of the digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software that is used in safety-critical digital systems), the use of Bayesian Belief Networks (BBNs) seems to be most widespread. The use of BBNs in reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can use a process of directly estimating the reliability of the software using various software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small amount of expected failure data from the testing of safety-critical software, we try to find possibilities and corresponding limitations of applying software reliability growth models to safety-critical software.

  13. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from ... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was ..., on the other hand, lighter than the single-step method...

  14. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit for NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  15. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA

  16. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has an essentially time-dependent nature. The details of thinking and decision-making processes are important for detailed analysis of human reliability. They have, however, not been well considered by the conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented

  17. Dynamic reliability modeling of three-state networks

    OpenAIRE

    Ashrafi, S.; Asadi, M.

    2014-01-01

    This paper is an investigation into the reliability and stochastic properties of three-state networks. We consider a single-step network consisting of n links and we assume that the links are subject to failure. We assume that the network can be in three states, up (K = 2), partial performance (K = 1), and down (K = 0). Using the concept of the two-dimensional signature, we study the residual lifetimes of the networks under different scenarios on the states and the number of...

  18. Solvation of decane and benzene in mixtures of 1-octanol and N, N-dimethylformamide

    Science.gov (United States)

    Kustov, A. V.; Smirnova, N. L.

    2016-09-01

    The heats of dissolution of decane and benzene in a model system of octanol-1 (OctOH) and N, N-dimethylformamide (DMF) at 308 K are measured using a variable temperature calorimeter equipped with an isothermal shell. Standard enthalpies are determined and standard heat capacities of dissolution in the temperature range of 298-318 K are calculated using data obtained in [1, 2]. The state of hydrocarbon molecules in a binary mixture is studied in terms of the enhanced coordination model (ECM). Benzene is shown to be preferentially solvated by DMF over the range of physiological temperatures. The solvation shell of decane is found to be strongly enriched with 1-octanol. It is obvious that although both hydrocarbons are nonpolar, the presence of the aromatic π-system in benzene leads to drastic differences in their solvation in a lipid-protein medium.

  19. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform and taking the current international reliability research model, human cognition reliability, as a reference, part of the model is modified according to the actual situation of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators is obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators are carried out using this two-parameter Weibull distribution research model. The results are in agreement with those reported internationally. The research should be beneficial to the operational safety of nuclear power plants

  20. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shocks modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in a unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors.  The populations of manufactured items in industry are usually heterogeneous. However, conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of elimination of ‘weak’ items from heterogeneous populations. Real-life objects operate in a changing environment. One of the ways to model the impact of this environment is via external shocks occurring in accordance with some stocha...

  1. Research on cognitive reliability model for main control room considering human factors in nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Jianjun; Zhang Li; Wang Yiqun; Zhang Kun; Peng Yuyuan; Zhou Cheng

    2012-01-01

    To address the shortcomings of traditional cognitive factors and cognitive models, this paper presents a Bayesian-network cognitive reliability model that takes the main control room as the reference background and human factors as the key points. The model mainly analyzes how cognitive reliability is affected by human factors, and for each cognitive node and the influence factors corresponding to that node, a series of methods and formulas for computing the node's cognitive reliability is proposed. The model and corresponding methods can be applied to the evaluation of the cognitive process of nuclear power plant operators and are of significance for the prevention of safety accidents in nuclear power plants. (authors)

  2. Solvation pressure as real pressure: I. Ethanol and starch under negative pressure

    CERN Document Server

    Uden, N W A V; Faux, D A; Tanczos, A C; Howlin, B; Dunstan, D J

    2003-01-01

    The reality of the solvation pressure generated by the cohesive energy density of liquids is demonstrated by three methods. Firstly, the Raman spectrum of ethanol as a function of cohesive energy density (solvation pressure) in ethanol-water and ethanol-chloroform mixtures is compared with the Raman spectrum of pure ethanol under external hydrostatic pressure, and the solvation pressure and hydrostatic pressure are found to be equivalent for some transitions. Secondly, the bond lengths of ethanol are calculated by molecular dynamics modelling for liquid ethanol under pressure and for ethanol vapour. The differences in bond lengths between vapour and liquid are found to be equivalent to the solvation pressure for the CH3, CH2 and O-H bond lengths, with discrepancies for the C-C and C-O bond lengths. Thirdly, the pressure-induced gelation of potato starch is measured in pure water and in mixtures of water and ethanol. The phase transition pressure varies in accordance with the change in solvation pre...

  3. Model case IRS-RWE for the determination of reliability data in practical operation

    Energy Technology Data Exchange (ETDEWEB)

    Hoemke, P; Krause, H

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of the paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype reliability data collection 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results.

  4. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...
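
    A compact sketch of the Grey Wolf Optimization idea applied to SRGM parameter fitting is given below; it minimizes the squared error of a Goel-Okumoto mean value function against made-up weekly failure counts, and the data, bounds and settings are illustrative rather than the authors' implementation:

        # Sketch: Grey Wolf Optimization of Goel-Okumoto parameters (a, b) by
        # least squares against cumulative failure counts. Data and bounds are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(1, 21, dtype=float)                       # weeks of testing
        faults = np.array([ 5,  9, 13, 16, 19, 21, 23, 25, 26, 27,
                           28, 29, 30, 30, 31, 31, 32, 32, 32, 33], dtype=float)

        def sse(p):
            a, b = p
            return np.sum((faults - a * (1.0 - np.exp(-b * t)))**2)

        lower, upper = np.array([1.0, 1e-3]), np.array([200.0, 1.0])
        n_wolves, n_iter, dim = 20, 200, 2
        X = lower + rng.random((n_wolves, dim)) * (upper - lower)

        best_x, best_f = None, np.inf
        for it in range(n_iter):
            fitness = np.array([sse(x) for x in X])
            order = np.argsort(fitness)
            if fitness[order[0]] < best_f:
                best_x, best_f = X[order[0]].copy(), fitness[order[0]]
            alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
            a_coef = 2.0 - 2.0 * it / n_iter                   # decreases linearly 2 -> 0
            for i in range(n_wolves):
                new = np.zeros(dim)
                for leader in (alpha, beta, delta):
                    A = 2 * a_coef * rng.random(dim) - a_coef
                    C = 2 * rng.random(dim)
                    D = np.abs(C * leader - X[i])
                    new += leader - A * D
                X[i] = np.clip(new / 3.0, lower, upper)

        print(f"Estimated a = {best_x[0]:.1f}, b = {best_x[1]:.3f}, SSE = {best_f:.2f}")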

  5. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model provides the possibility for the subsystems’ components to be non-homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
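
    The structural point, that a parallel subsystem may mix components of different types, reduces to a simple product form for each subsystem; a minimal sketch with invented component reliabilities:

        # Sketch: reliability of a series-parallel system in which each parallel
        # subsystem may mix non-homogeneous (different-reliability) components.
        # Component reliabilities are illustrative.
        import math

        # Each inner list is one subsystem; entries are reliabilities of the redundant
        # components allocated to it (component types may differ within a subsystem).
        subsystems = [
            [0.92, 0.88],            # subsystem 1: two different component types
            [0.95, 0.95, 0.85],      # subsystem 2: mixed allocation of three components
            [0.90],                  # subsystem 3: single component, no redundancy
        ]

        def subsystem_reliability(rels):
            # Parallel redundancy: the subsystem fails only if every component fails.
            return 1.0 - math.prod(1.0 - r for r in rels)

        system_reliability = math.prod(subsystem_reliability(s) for s in subsystems)
        print(f"System reliability: {system_reliability:.4f}")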

  6. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for wind power converters with a more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect the relationship between reliability performance and the corresponding semiconductor cost for power switching devices. First, the conduction loss, switching loss, and thermal impedance models of power switching devices (IGBT modules) are related to the semiconductor chip number information, respectively. Afterwards, simplified analytical solutions, which can directly extract the junction temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power...

  7. Partial solvation parameters and LSER molecular descriptors

    International Nuclear Information System (INIS)

    Panayiotou, Costas

    2012-01-01

    Graphical abstract: The one-to-one correspondence of LSER molecular descriptors and partial solvation parameters (PSPs) for propionic acid. Highlights: ► Quantum-mechanics based development of a new QSPR predictive method. ► One-to-one correspondence of partial solvation parameters and LSER molecular descriptors. ► Development of alternative routes for the determination of partial solvation parameters and solubility parameters. ► Expansion and enhancement of solubility parameter approach. - Abstract: The partial solvation parameters (PSP) have been defined recently, on the basis of the insight derived from modern quantum chemical calculations, in an effort to overcome some of the inherent restrictions of the original definition of solubility parameter and expand its range of applications. The present work continues along these lines and introduces two new solvation parameters, the van der Waals and the polarity/refractivity ones, which may replace both of the former dispersion and polar PSPs. Thus, one may use either the former scheme of PSPs (dispersion, polar, acidic, and basic) or, equivalently, the new scheme (van der Waals, polarity/refractivity, acidic, basic). The new definitions are made in a simple and straightforward manner and, thus, the strength and appeal of the widely accepted concept of solubility parameter is preserved. The inter-relations of the various PSPs are critically discussed and their values are tabulated for a variety of common substances. The advantage of the new scheme of PSPs is the bridge that makes with the corresponding Abraham’s LSER descriptors. With this bridge, one may exchange information between PSPs, LSER experimental scales, and quantum mechanics calculations such as via the COSMO-RS theory. The proposed scheme is a predictive one and it is applicable to, both, homo-solvated and hetero-solvated compounds. The new scheme is tested for the calculation of activity coefficients at infinite dilution, for octanol

  8. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  9. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, and relevant design ... variables are identified and an optimal system reliability formulation is presented. An illustrative example is given....

  10. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  11. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  12. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability in order to reach the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the Temperature Alarm Circuit (TAC) data set. However, the Exponential distribution is found to be the best fit for modeling the failure rate

  13. Corrosion Thermodynamics of Magnesium and Alloys from First Principles as a Function of Solvation

    Science.gov (United States)

    Limmer, Krista; Williams, Kristen; Andzelm, Jan

    Thermodynamics of corrosion processes occurring on magnesium surfaces, such as hydrogen evolution and water dissociation, have been examined with density functional theory (DFT) to evaluate the effect of impurities and dilute alloying additions. The modeling of corrosion thermodynamics requires examination of species in a variety of chemical and electronic states in order to accurately represent the complex electrochemical corrosion process. In this study, DFT calculations for magnesium corrosion thermodynamics were performed with two DFT codes (VASP and DMol3), with multiple exchange-correlation functionals for chemical accuracy, as well as with various levels of implicit and explicit solvation for surfaces and solvated ions. The accuracy of the first principles calculations has been validated against Pourbaix diagrams constructed from solid, gas and solvated charged ion calculations. For aqueous corrosion, it is shown that a well parameterized implicit solvent is capable of accurately representing all but the first coordinating layer of explicit water for charged ions.

  14. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

    To establish a more flexible and accurate reliability model, a reliability modeling and solving algorithm based on the meta-action chain concept is used in this work. Instead of estimating the reliability of the whole system only in the standard operating mode, this work adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the fixed factors applied in traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate the system reliability under different operating modes by modeling the components with a variety of failure sensitivities. This approach has been verified by computing several electromechanical system cases. The results indicate that the process can improve system reliability estimation. It is an effective tool for solving the reliability estimation problem for a system under various operating modes.

  15. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. ... Hardware can be repaired by spare modules, which is not the case for software. ... Preventive maintenance is very important

  16. Fatigue reliability and effective turbulence models in wind farms

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Frandsen, Sten Tronæs; Tarp-Johansen, N.J.

    2007-01-01

    behind wind turbines can imply a significant reduction in the fatigue lifetime of wind turbines placed in wakes. In this paper the design code model in the wind turbine code IEC 61400-1 (2005) is evaluated from a probabilistic point of view, including the importance of modeling the SN-curve by linear...

  17. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software make it possible to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...

  18. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model provides an even worse combinatorial explosion of node states with respect to the calculation of WSNs on the unicast model; many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our research on OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that the OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.

  19. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

    Technical reliability plays an important role among the factors affecting the power output of a wind farm. The reliability is determined by the internal collection grid topology and the reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A quantitative measure of wind farm reliability can be the probability distribution of combinations of operating and failed states of the farm's wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which means the availability of the wind turbine and other equipment necessary for the power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and the reliability of individual farm components on the farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in an analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify the wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models, Bayesian networks and semi-Markov processes were used. Using Bayesian networks, the wind farm structural reliability was mapped, as well as quantitative characteristics describing equipment reliability. To determine these characteristics, semi-Markov processes were used. The paper presents an example calculation of: (i) the probability distribution of the combination of both operating and failed states of four wind turbines included in the wind farm, and (ii) the expected wind farm output power with consideration of its reliability.
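
    For a rough feel of the first of those outputs (a simplified sketch that treats the four turbines as independent and equally available, ignoring the collection-grid topology and the semi-Markov state durations captured by the paper's models; all numbers are invented):

        # Sketch: probability distribution of the number of operating wind turbines
        # and the expected farm output, assuming four independent turbines with the
        # same availability. Values are illustrative only.
        from math import comb

        n_turbines = 4
        availability = 0.96          # per-turbine probability of the operating state (assumed)
        rated_power = 2.0            # MW per turbine (assumed)

        dist = {k: comb(n_turbines, k) * availability**k * (1 - availability)**(n_turbines - k)
                for k in range(n_turbines + 1)}

        expected_power = sum(k * rated_power * p for k, p in dist.items())
        for k, p in dist.items():
            print(f"P({k} turbines up) = {p:.5f}")
        print(f"Expected farm output (reliability only): {expected_power:.2f} MW")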

  20. Cognitive modelling: a basic complement of human reliability analysis

    International Nuclear Information System (INIS)

    Bersini, U.; Cacciabue, P.C.; Mancini, G.

    1988-01-01

    In this paper the issues identified in modelling humans and machines are discussed in the perspective of the consideration of human errors managing complex plants during incidental as well as normal conditions. The dichotomy between the use of a cognitive versus a behaviouristic model approach is discussed and the complementarity aspects rather than the differences of the two methods are identified. A cognitive model based on a hierarchical goal-oriented approach and driven by fuzzy logic methodology is presented as the counterpart to the 'classical' THERP methodology for studying human errors. Such a cognitive model is discussed at length and its fundamental components, i.e. the High Level Decision Making and the Low Level Decision Making models, are reviewed. Finally, the inadequacy of the 'classical' THERP methodology to deal with cognitive errors is discussed on the basis of a simple test case. For the same case the cognitive model is then applied showing the flexibility and adequacy of the model to dynamic configuration with time-dependent failures of components and with consequent need for changing of strategy during the transient itself. (author)

  1. Construction of a reliable model pyranometer for irradiance ...

    African Journals Online (AJOL)


    2010-03-22

    hour, latitude and cloud cover are the most widely or commonly used ... models in the Nigerian environment include that of Burari and Sambo .... influence the stability of the assembly (reducing its phase ... earth's surface.

  2. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. Until now, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model leads to large errors. By contrast, a variety of engine failure modes can be taken into account with a mixed Weibull distribution model, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model in order to make the reliability estimation more accurate; thus the precision of the mixed distribution reliability model is greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.

  3. Preferential solvation and solvation shell composition of free base and protonated 5, 10, 15, 20-tetrakis(4-sulfonatophenyl)porphyrin in aqueous organic mixed solvents

    Science.gov (United States)

    Farajtabar, Ali; Jaberi, Fatemeh; Gharib, Farrokh

    2011-12-01

    The solvatochromic properties of the free base and the protonated 5, 10, 15, 20-tetrakis(4-sulfonatophenyl)porphyrin (TPPS) were studied in pure water, methanol, ethanol (protic solvents), dimethylsulfoxide, DMSO, (non-protic solvent), and their corresponding aqueous-organic binary mixed solvents. The correlation of the empirical solvent polarity scale (ET) values of TPPS with the composition of the solvents was analyzed by the solvent exchange model of Bosch and Roses to clarify the preferential solvation of the probe dyes in the binary mixed solvents. The solvation shell composition and the synergistic effects in the preferential solvation of the solute dyes were investigated in terms of both solvent-solvent and solute-solvent interactions, and the local mole fraction of each solvent component was calculated in the cybotactic region of the probe. The variation of the effective mole fraction may provide significant physico-chemical insight at the microscopic and molecular level into the interactions between TPPS species and the solvent components and can therefore be used to interpret the solvent effect on the kinetics and thermodynamics of TPPS. The results obtained on preferential solvation and solvent-solvent interactions have been successfully applied to explain the variation of the protonation equilibrium behavior of TPPS in aqueous organic mixed solvents of methanol, ethanol and DMSO.

  4. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.

  5. Charge transport models for reliability engineering of semiconductor devices

    International Nuclear Information System (INIS)

    Bina, M.

    2014-01-01

    The simulation of semiconductor devices is important for the assessment of device lifetimes before production. In this context, this work investigates the influence of the charge carrier transport model on the accuracy of bias temperature instability and hot-carrier degradation models in MOS devices. For this purpose, a four-state defect model based on non-radiative multiphonon (NMP) theory is implemented to study the bias temperature instability. However, the doping concentrations typically used in nano-scale devices correspond to only a small number of dopants in the channel, leading to fluctuations of the electrostatic potential. Thus, the granularity of the doping cannot be ignored in these devices. To study the bias temperature instability in the presence of fluctuations of the electrostatic potential, the advanced drift diffusion device simulator Minimos-NT is employed. In a first effort to understand the bias temperature instability in p-channel MOSFETs at elevated temperatures, data from direct-current current-voltage measurements is successfully reproduced using a four-state defect model. Differences between the four-state defect model and the commonly employed trapping model from Shockley, Read and Hall (SRH) have been investigated, showing that the SRH model is incapable of reproducing the measurement data. This is in good agreement with the literature, where it has been extensively shown that a model based on SRH theory cannot reproduce the characteristic time constants found in BTI recovery traces. Upon inspection of recorded recovery traces after bias temperature stress in n-channel MOSFETs, it is found that the gate current is strongly correlated with the drain current (recovery trace). Using a random discrete dopant model and non-equilibrium Green's functions, it is shown that direct tunnelling cannot explain the magnitude of the gate current reduction. Instead, it is found that trap-assisted tunnelling, modelled using NMP theory, is the cause of this reduction.

  6. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of FTTs in reliability. However, fault detection coverage alone is insufficient to reflect the effects of the various FTTs in the reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability

  7. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  8. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    ... configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between ... was applied for the estimation of the system failure functions. It is desired to compare the results with the true system failure function, which it is possible to estimate using simulation techniques. Theoretical model development should be applied in further research. One of the directions for it might be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea to represent the system by independent components could also be used for modeling reliability by Sequential Order Statistics.

  9. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  10. Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Brittle failure is often influenced by difficult to measure and variable microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement in comparison with the experimentally measured results. Microstructure scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.

  11. Modeling human intention formation for human reliability assessment

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.; Pople, H. Jr.

    1988-01-01

    This paper describes a dynamic simulation capability for modeling how people form intentions to act in nuclear power plant emergency situations. This modeling tool, Cognitive Environment Simulation or CES, was developed based on techniques from artificial intelligence. It simulates the cognitive processes that determine situation assessment and intention formation. It can be used to investigate analytically what situations and factors lead to intention failures, what actions follow from intention failures (e.g. errors of omission, errors of commission, common mode errors), the ability to recover from errors or additional machine failures, and the effects of changes in the NPP person machine system. One application of the CES modeling environment is to enhance the measurement of the human contribution to risk in probabilistic risk assessment studies. (author)

  12. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

    This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously being subject to research and improvement. On the contrary, there are compar...

  13. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this requirement can be fulfilled by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of reliability characteristics of waste management facilities containing intermediate storage facilities. The application of the model is demonstrated by an example. (orig.) [de
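    As an illustration of the Markov-process idea, the following minimal continuous-time sketch shows how a steady-state availability could be obtained from a generator matrix; the three states and all rate values are hypothetical, not taken from the report.

    import numpy as np

    # States: 0 = plant and storage operating, 1 = plant down but intermediate
    # storage still absorbing waste, 2 = facility unavailable.
    lam, mu = 0.02, 0.5      # illustrative failure and repair rates of the plant (1/h)
    nu = 0.1                 # illustrative rate at which the intermediate storage fills (1/h)

    Q = np.array([[-lam,        lam,  0.0],
                  [  mu, -(mu + nu),   nu],
                  [  mu,        0.0,  -mu]])

    # Stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    availability = pi[0] + pi[1]   # facility counts as available while the buffer holds
    print(f"steady-state availability = {availability:.4f}")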

  14. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  15. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model based on the stress - crack opening displacement relationship and, which describes the behaviour of randomly oriented steel fibres composite subjected to uniaxial tension has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  16. Multi-state reliability for coolant pump based on dependent competitive failure model

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump was proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, the degradation state probability and system reliability were obtained under the consideration of internal vibration and external shocks for the degraded coolant pump. It provides an effective method for the reliability analysis of coolant pumps in nuclear power plants based on the operating environment. The results can provide a decision-making basis for design changes and maintenance optimization. (authors)

  17. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available The fuzziness and randomness are integrated by using numerical characteristics, such as expected value, entropy and hyper-entropy. A cloud model adapted to reliability evaluation is put forward based on the concept of the surface-to-air missile weapon. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. The practical calculation result shows that it is more effective to analyze the reliability of the surface-to-air missile weapon in this way. The practical calculation result also reflects that the model expressed by cloud theory is more consistent with the human style of reasoning under uncertainty.
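    For orientation, a minimal sketch of the forward normal cloud generator follows; the reliability grade and the values of Ex (expected value), En (entropy) and He (hyper-entropy) are illustrative assumptions, not the paper's data.

    import numpy as np

    def normal_cloud_drops(Ex, En, He, n=1000, rng=None):
        """Forward normal cloud generator: returns cloud drops and their memberships.
        Ex = expected value, En = entropy, He = hyper-entropy."""
        rng = np.random.default_rng(rng)
        En_prime = rng.normal(En, He, size=n)                 # randomized entropy
        x = rng.normal(Ex, np.abs(En_prime))                  # cloud drops
        mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))   # membership degrees
        return x, mu

    # Hypothetical reliability grade "high" expressed on a 0-1 scale.
    drops, memberships = normal_cloud_drops(Ex=0.85, En=0.05, He=0.01, n=5, rng=0)
    print(drops, memberships)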

  18. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software. A software and hardware system likewise depends on rules, that is, the procedures for its use. If the procedure or program can be reliably characterized by involving the concepts of graphs, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems supported by rules of use issued by the relevant agencies. The enumeration model is obtained based on a software reliability calculation.

  19. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability estimates can be subjective under a particular set of circumstances, and therefore the reliability is not easy to quantify. Among the reliability prediction methods are the method based on statistical analysis, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, building it around the statistical analysis method as the approach that is easiest to apply. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components
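    The part-stress handbook methods named above share a multiplicative structure; the sketch below illustrates that structure only, and every base rate and pi factor in it is a made-up placeholder rather than an actual MIL-HDBK-217F, PRISM or Telcordia value.

    def part_failure_rate(lambda_b, pi_factors):
        """Part-stress failure rate as a product of a base rate and pi factors
        (the multiplicative structure used by handbook methods such as
        MIL-HDBK-217F); returned in failures per 10^6 hours."""
        rate = lambda_b
        for factor in pi_factors.values():
            rate *= factor
        return rate

    def series_system_failure_rate(part_rates):
        """For a series (non-redundant) system the part failure rates add."""
        return sum(part_rates)

    # Hypothetical parts with illustrative base rates and pi factors
    # (quality, temperature, environment); not actual handbook values.
    parts = [
        part_failure_rate(0.010, {"pi_Q": 1.0, "pi_T": 2.1, "pi_E": 4.0}),
        part_failure_rate(0.0035, {"pi_Q": 0.7, "pi_T": 1.5, "pi_E": 4.0}),
    ]
    lam = series_system_failure_rate(parts)          # failures / 1e6 h
    mtbf_hours = 1e6 / lam
    print(f"system failure rate = {lam:.4f} per 1e6 h, MTBF = {mtbf_hours:,.0f} h")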

  20. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
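    One common way to combine estimates across candidate models is information-criterion weighting; the sketch below uses Akaike weights as an illustrative scheme with hypothetical reliability coefficients, and is not claimed to be the authors' exact averaging procedure.

    import numpy as np

    def akaike_weights(aic_values):
        """Convert AIC values into model weights (smaller AIC -> larger weight)."""
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    def model_averaged_estimate(estimates, aic_values):
        """Weighted average of per-model estimates (e.g., reliability coefficients)."""
        w = akaike_weights(aic_values)
        return float(np.dot(w, estimates))

    # Hypothetical reliability coefficients and AICs from three candidate
    # covariance structures fitted to the same longitudinal data.
    reliabilities = [0.78, 0.81, 0.74]
    aics = [1012.3, 1010.8, 1019.5]
    print(model_averaged_estimate(reliabilities, aics))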

  1. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, this paper presents how the quantified uncertainties can be implemented in probabilistic reliability assessments.

  2. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Considered are four different wave models, and validation data is collected from published scientific research. The bias, the root-mean-square error as well as the scatter index are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example it is shown how the estimated uncertainties can be implemented in probabilistic reliability assessments.

  3. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on the imprecise Bayesian inference and are imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors which is defined by upper and lower probabilities that can converge as statistical data ...

  4. A Structural Reliability Business Process Modelling with System Dynamics Simulation

    OpenAIRE

    Lam, C. Y.; Chan, S. L.; Ip, W. H.

    2010-01-01

    Business activity flow analysis enables organizations to manage structured business processes, and can thus help them to improve performance. The six types of business activities identified here (i.e., SOA, SEA, MEA, SPA, MSA and FIA) are correlated and interact with one another, and the decisions from any business activity form feedback loops with previous and succeeding activities, thus allowing the business process to be modelled and simulated. For instance, for any company that is eager t...

  5. Evidence for Reduced Hydrogen-Bond Cooperativity in Ionic Solvation Shells from Isotope-Dependent Dielectric Relaxation

    Science.gov (United States)

    Cota, Roberto; Ottosson, Niklas; Bakker, Huib J.; Woutersen, Sander

    2018-05-01

    We find that the reduction in dielectric response (depolarization) of water caused by solvated ions is different for H2O and D2O. This isotope dependence allows us to reliably determine the kinetic contribution to the depolarization, which is found to be significantly smaller than predicted by existing theory. The discrepancy can be explained by a reduced hydrogen-bond cooperativity in the solvation shell: we obtain quantitative agreement between theory and experiment by reducing the Kirkwood correlation factor of the solvating water from 2.7 (the bulk value) to ∼1.6 for NaCl and ∼1 (corresponding to completely uncorrelated motion of water molecules) for CsCl.

  6. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
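    The acceleration-function part of such a model is often written in the voltage-temperature form attributed to Prokopowicz and Vaskas; the sketch below shows that general form with illustrative exponent, activation energy and stress values, not the fitted values of the proposed model.

    import math

    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(v_use, v_test, t_use_k, t_test_k, n, ea_ev):
        """Voltage-temperature acceleration factor of the Prokopowicz-Vaskas type:
        AF = (V_test / V_use)**n * exp((Ea/k) * (1/T_use - 1/T_test))."""
        voltage_term = (v_test / v_use) ** n
        thermal_term = math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))
        return voltage_term * thermal_term

    # Illustrative numbers only: 2x rated voltage and 125 C test versus 1x and 45 C use,
    # with n = 3 and Ea = 1.1 eV (typical orders of magnitude, not fitted values).
    af = acceleration_factor(v_use=1.0, v_test=2.0, t_use_k=318.15, t_test_k=398.15,
                             n=3.0, ea_ev=1.1)
    print(f"acceleration factor = {af:.1f}")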

  7. Tracking reliability for space cabin-borne equipment in development by Crow model.

    Science.gov (United States)

    Chen, J D; Jiao, S J; Sun, H L

    2001-12-01

    Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, which is composed of the Crow model and test data conversion (TDC) method was used. Result. The estimation and prediction value of the reliability growth conformed to its expectations. Conclusion. The method could dynamically estimate and predict the reliability of the equipment by making full use of various test information in the course of its development. It offered not only a possibility of tracking the equipment reliability growth, but also the reference for quality control in manned spaceflight cabin-borne equipment design and development process.
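    For reference, the Crow (AMSAA) model describes the expected cumulative number of failures as N(t) = lambda * t**beta; the sketch below fits it by maximum likelihood for a time-truncated test with hypothetical failure times, as a generic illustration of the tracking idea rather than the authors' combined Crow/TDC procedure.

    import math

    def crow_amsaa_fit(failure_times, total_time):
        """Maximum-likelihood fit of the Crow (AMSAA) reliability-growth model,
        time-truncated case: expected cumulative failures N(t) = lam * t**beta."""
        n = len(failure_times)
        beta = n / sum(math.log(total_time / t) for t in failure_times)
        lam = n / total_time ** beta
        return lam, beta

    def instantaneous_mtbf(lam, beta, t):
        """Instantaneous MTBF at time t: 1 / (lam * beta * t**(beta - 1))."""
        return 1.0 / (lam * beta * t ** (beta - 1))

    # Hypothetical failure times (hours) observed during a 500 h development test.
    times = [35.0, 90.0, 160.0, 310.0, 470.0]
    lam, beta = crow_amsaa_fit(times, total_time=500.0)
    print(f"beta = {beta:.2f} (beta < 1 indicates reliability growth)")
    print(f"MTBF at 500 h = {instantaneous_mtbf(lam, beta, 500.0):.0f} h")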

  8. Modeling Manufacturing Impacts on Aging and Reliability of Polyurethane Foams

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R.; Roberts, Christine Cardinal; Mondy, Lisa Ann; Soehnel, Melissa Marie; Johnson, Kyle; Lorenzo, Henry T.

    2016-10-01

    Polyurethane is a complex multiphase material that evolves from a viscous liquid to a system of percolating bubbles, which are created via a CO2 generating reaction. The continuous phase polymerizes to a solid during the foaming process generating heat. Foams introduced into a mold increase their volume up to tenfold, and the dynamics of the expansion process may lead to voids and will produce gradients in density and degree of polymerization. These inhomogeneities can lead to structural stability issues upon aging. For instance, structural components in weapon systems have been shown to change shape as they age depending on their molding history, which can threaten critical tolerances. The purpose of this project is to develop a Cradle-to-Grave multiphysics model, which allows us to predict the material properties of foam from its birth through aging in the stockpile, where its dimensional stability is important.

  9. Relaxation dynamics following transition of solvated electrons

    International Nuclear Information System (INIS)

    Barnett, R.B.; Landman, U.; Nitzan, A.

    1989-01-01

    Relaxation dynamics following an electronic transition of an excess solvated electron in clusters and in bulk water is studied using an adiabatic simulation method. In this method the solvent evolves classically and the electron is constrained to a specified state. The coupling between the solvent and the excess electron is evaluated via the quantum expectation value of the electron-water molecule interaction potential. The relaxation following excitation (or deexcitation) is characterized by two time scales: (i) a very fast (∼20-30 fs) one associated with molecular rotations in the first solvation shell about the electron, and (ii) a slower stage (∼200 fs), which is of the order of the longitudinal dielectric relaxation time. The fast relaxation stage exhibits an isotope effect. The spectroscopical consequences of the relaxation dynamics are discussed

  10. Assessing Reliability of Cellulose Hydrolysis Models to Support Biofuel Process Design – Identifiability and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Meyer, Anne S.; Gernaey, Krist

    2010-01-01

    The reliability of cellulose hydrolysis models is studied using the NREL model. An identifiability analysis revealed that only 6 out of 26 parameters are identifiable from the available data (typical hydrolysis experiments). Attempting to identify a higher number of parameters (as done in the ori...

  11. CrystalExplorer model energies and energy frameworks: extension to metal coordination compounds, organic salts, solvates and open-shell systems

    Directory of Open Access Journals (Sweden)

    Campbell F. Mackenzie

    2017-09-01

    Full Text Available The application domain of accurate and efficient CE-B3LYP and CE-HF model energies for intermolecular interactions in molecular crystals is extended by calibration against density functional results for 1794 molecule/ion pairs extracted from 171 crystal structures. The mean absolute deviation of CE-B3LYP model energies from DFT values is a modest 2.4 kJ mol−1 for pairwise energies that span a range of 3.75 MJ mol−1. The new sets of scale factors determined by fitting to counterpoise-corrected DFT calculations result in minimal changes from previous energy values. Coupled with the use of separate polarizabilities for interactions involving monatomic ions, these model energies can now be applied with confidence to a vast number of molecular crystals. Energy frameworks have been enhanced to represent the destabilizing interactions that are important for molecules with large dipole moments and organic salts. Applications to a variety of molecular crystals are presented in detail to highlight the utility and promise of these tools.

  12. Recent results on solvation dynamics of electron and spur reactions of solvated electron in polar solvents studied by femtosecond laser spectroscopy and picosecond pulse radiolysis

    International Nuclear Information System (INIS)

    Mostafavi, M.

    2006-01-01

    Here, we report several studies done recently at the ELYSE laboratory on the solvation dynamics of the electron and on the kinetics of the solvated electron in spur reactions, performed by femtosecond laser spectroscopy and picosecond pulse radiolysis, respectively. Solvated electrons have been produced in polyols (1,2-ethanediol, 1,2-propanediol and 1,3-propanediol) by two-photon ionization of the solvent with 263 nm femtosecond laser pulses at room temperature. The two-photon absorption coefficient of these solvents at 263 nm has been determined. The dynamics of electron solvation in polyols has been studied by pump-probe transient absorption spectroscopy. Thus, time-resolved absorption spectra ranging from 430 to 720 nm have been measured (Figure 1). A blue shift of the spectra is observed for the first tens of picoseconds. Using a Bayesian data analysis method, the observed solvation dynamics are reconstructed with different models: stepwise mechanisms, continuous relaxation models or combinations of stepwise and continuous relaxation. That analysis clearly indicates that it is not obvious to select a unique model to describe the solvation dynamics of the electron in diols. We showed that several models are able to correctly reproduce the data: a two-step model, a heterogeneous or bi-exponential continuous relaxation model and even a hybrid model with a stepwise transition and homogeneous continuous relaxation. Nevertheless, the best fits are given by the continuous spectral relaxation models. The fact that the time evolution of the absorption spectrum of the solvated electron in diols can be accurately described by the temperature-dependent absorption spectrum of the ground-state solvated electron suggests that the spectral blue shift is mostly caused by the continuous relaxation of the electron trapped in a large distribution of solvent cages. Similar trends in electron solvation dynamics are observed in the cases of 1,2-ethanediol, 1,3-propanediol and 1,2-propanediol

  13. Preferential Solvation of an Asymmetric Redox Molecule

    Energy Technology Data Exchange (ETDEWEB)

    Han, Kee Sung; Rajput, Nav Nidhi; Vijayakumar, M.; Wei, Xiaoliang; Wang, Wei; Hu, Jian Z.; Persson, Kristin A.; Mueller, Karl T.

    2016-12-15

    The fundamental correlations between inter-molecular interactions, solvation structure and functionality of electrolytes are in many cases unknown, particularly for multi-component liquid systems. In this work, we explore such correlations by investigating the complex interplay between solubility and solvation structure for the electrolyte system comprising N-(ferrocenylmethyl)-N,N-dimethyl-N-ethylammonium bistrifluoromethylsulfonimide (Fc1N112-TFSI) dissolved in a ternary carbonate solvent mixture using combined NMR relaxation and computational analyses. Probing the evolution of the solvent-solvent, ion-solvent and ion-ion interactions with an increase in solute concentration provides a molecular level understanding of the solubility limit of the Fc1N112-TFSI system. An increase in solute concentration leads to pronounced Fc1N112-TFSI contact-ion pair formation by diminishing solvent-solvent and ion-solvent type interactions. At the solubility limit, the precipitation of solute is initiated through agglomeration of contact-ion pairs due to overlapping solvation shells.

  14. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. Objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  15. On reliability and maintenance modelling of ageing equipment in electric power systems

    International Nuclear Information System (INIS)

    Lindquist, Tommie

    2008-04-01

    Maintenance optimisation is essential to achieve cost-efficiency, availability and reliability of supply in electric power systems. The process of maintenance optimisation requires information about the costs of preventive and corrective maintenance, as well as the costs of failures borne by both electricity suppliers and customers. To calculate expected costs, information is needed about equipment reliability characteristics and the way in which maintenance affects equipment reliability. The aim of this Ph.D. work has been to develop equipment reliability models taking the effect of maintenance into account. The research has focussed on the interrelated areas of condition estimation, reliability modelling and maintenance modelling, which have been investigated in a number of case studies. In the area of condition estimation two methods to quantitatively estimate the condition of disconnector contacts have been developed, which utilise results from infrared thermography inspections and contact resistance measurements. The accuracy of these methods were investigated in two case studies. Reliability models have been developed and implemented for SF6 circuit-breakers, disconnector contacts and XLPE cables in three separate case studies. These models were formulated using both empirical and physical modelling approaches. To improve confidence in such models a Bayesian statistical method incorporating information from the equipment design process was also developed. This method was illustrated in a case study of SF6 circuit-breaker operating rods. Methods for quantifying the effect of maintenance on equipment condition and reliability have been investigated in case studies on disconnector contacts and SF6 circuit-breakers. The input required by these methods are condition measurements and historical failure and maintenance data, respectively. This research has demonstrated that the effect of maintenance on power system equipment may be quantified using available data

  16. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    ... environmental factors. Reliability assessment for such types of products is conventionally performed by classical reliability techniques based on test data. Usually, conventional reliability approaches are time- and resource-consuming activities. Thus, in this paper we choose a physics-of-failure approach to define a damage model based on Miner's rule. Our attention is focused on crack propagation in solder joints of electrical components due to temperature loadings. Based on the proposed method, it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed ...
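    Miner's rule accumulates damage linearly over load levels, D = sum(n_i / N_i), with failure predicted at D = 1; the sketch below applies it to a hypothetical solder-joint thermal-cycling profile, where the cycles-to-failure figures are placeholders rather than results of the proposed crack-propagation model.

    def miner_damage(cycle_counts, cycles_to_failure):
        """Palmgren-Miner linear damage accumulation: D = sum(n_i / N_i).
        Failure is predicted when D reaches 1."""
        return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

    # Hypothetical thermal-cycling profile for a solder joint: cycles applied per
    # year at three temperature swings, and the corresponding cycles to failure
    # (which in practice would come from a crack-propagation or fatigue model).
    applied = [2000, 400, 50]            # n_i per year for small/medium/large swings
    to_failure = [150_000, 12_000, 900]  # N_i for each swing amplitude

    damage_per_year = miner_damage(applied, to_failure)
    print(f"damage per year = {damage_per_year:.3f}")
    print(f"estimated life = {1.0 / damage_per_year:.1f} years")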

  17. The model case IRS-RWE for the determination of reliability data in practical operation

    International Nuclear Information System (INIS)

    Hoemke, P.; Krause, H.

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of this paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype collection of reliability data, 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results. (orig.) [de

  18. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    As a result of studying the algorithm governing the functioning of the early-detection module for excessive losses, it is proven that the module can be modelled using absorbing Markov chains. Of particular interest is the study of the probabilistic characteristics of this algorithm, in order to identify the relationship between the reliability indicators of individual elements, or the probabilities of occurrence of certain events, and the likelihood of transmission of reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.

  19. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  20. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on prevention of potential harmful effects of systems on the environment. Software/hardware correctness engineers focus on production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate this as follows. First, on the basis of the requirements, they build a model of the system interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are presented in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black box model no longer possesses scenarios of types (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black box model and the last model serves to automatically generate proved executable code. The behavior of each model will be proved to conform to the behavior of the previous one. They build each model as a cluster of interactive concurrent objects, thus allowing both top-down and bottom-up development

  1. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading by using a simple two-state Markov chain model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failures in order to increase the design life and to eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle and low rotating bending and torsional stress. The Markov chain was used to model the two states based on the probability of failure due to bending and torsional stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov chain model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov chain model are illustrated by the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov chain model is able to generate data close to the field data with a minimal percentage of error and, for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading
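    A minimal two-state Markov chain of the kind described can be sketched as follows; the transition probabilities between bending- and torsion-dominated loading and the per-block failure probabilities are illustrative assumptions, not the paper's field-calibrated values.

    import numpy as np

    # State 0 = bending-dominated loading, state 1 = torsion-dominated loading.
    P = np.array([[0.9, 0.1],        # transitions between loading states per cycle block
                  [0.3, 0.7]])
    p_fail = np.array([2e-5, 5e-6])  # failure probability per cycle block in each state

    # Steady-state occupancy of the two loading states (left eigenvector of P for eigenvalue 1).
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi = pi / pi.sum()

    # Reliability after N cycle blocks, treating failures as independent per block.
    N = 100_000
    per_block_survival = 1.0 - float(pi @ p_fail)
    reliability = per_block_survival ** N
    print(f"steady state (bending, torsion) = {pi.round(3)}")
    print(f"reliability after {N} blocks = {reliability:.3f}")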

  2. Conductometric determination of solvation numbers of alkali metal cations

    International Nuclear Information System (INIS)

    Fialkov, Yu.Ya.; Gorbachev, V.Yu.; Chumak, V.L.

    1997-01-01

    Theories describing the interrelation of ion mobility with the effective radii of ions in solution are considered. The possibility of using these theories to determine the solvation numbers n_s of some ions is estimated. From conductometric data, values of n_s are calculated for alkali metal ions in propylene carbonate. The data obtained are compared with the solvation numbers determined using the entropies of ion solvation. The change of n_s values within the temperature range 273.15-323.15 K is considered. Using literature data, the effects of the crystallographic radii of the cations and of the medium permittivity on the values of the solvation numbers of the cations are analyzed. (author)
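    The kind of calculation involved can be sketched with the Stokes-law route from limiting ionic conductivity to an effective radius and then to a shell-volume estimate of n_s; the conductivity, viscosity, radius and molar-volume numbers below are order-of-magnitude placeholders, not the paper's data, and the shell-volume convention is only one of several in use.

    import math

    F = 96485.0          # Faraday constant, C/mol
    N_A = 6.022e23       # Avogadro constant, 1/mol

    def stokes_radius(z, limiting_conductivity, viscosity):
        """Stokes-law radius (m) from the limiting molar ionic conductivity
        (S m^2/mol) and solvent viscosity (Pa s): r = z^2 F^2 / (6 pi eta N_A lambda0)."""
        return (z ** 2) * F ** 2 / (6.0 * math.pi * viscosity * N_A * limiting_conductivity)

    def solvation_number(r_stokes, r_crystal, molar_volume_solvent):
        """Crude solvation number: solvation-shell volume divided by the volume
        of one solvent molecule (molar volume in m^3/mol)."""
        v_molecule = molar_volume_solvent / N_A
        shell = 4.0 / 3.0 * math.pi * (r_stokes ** 3 - r_crystal ** 3)
        return shell / v_molecule

    # Placeholder values only: a univalent cation with lambda0 = 9 S cm^2/mol,
    # solvent viscosity 2.5 mPa s, crystallographic radius 1.0 angstrom,
    # solvent molar volume 85 cm^3/mol.
    r_s = stokes_radius(1, 9e-4, 2.5e-3)
    n_s = solvation_number(r_s, 1.0e-10, 85e-6)
    print(f"Stokes radius = {r_s * 1e10:.2f} angstrom, n_s = {n_s:.1f}")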

  3. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  4. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that breakdown duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
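    A minimal sketch of the two ingredients follows; the logistic form for the breakdown probability and the Weibull duration parameters are illustrative choices standing in for the calibrated relationships of the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    def breakdown_probability(flow_rate, a=-12.0, b=0.006):
        """Probability that flow breaks down in a time interval as a function of
        flow rate (veh/h/lane), using a logistic form as an illustrative choice."""
        return 1.0 / (1.0 + np.exp(-(a + b * flow_rate)))

    def sample_breakdown_duration(shape=1.4, scale=18.0):
        """Breakdown duration (minutes) drawn from a Weibull hazard model
        (shape > 1 means the chance of recovery grows as the breakdown ages)."""
        return scale * rng.weibull(shape)

    # Endogenous breakdown sampling for one simulation interval at 2100 veh/h/lane.
    q = 2100.0
    if rng.random() < breakdown_probability(q):
        print(f"breakdown occurs, duration = {sample_breakdown_duration():.1f} min")
    else:
        print("no breakdown this interval")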

  5. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
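    A generic AK-MCS-style sketch of active learning kriging with the U learning function is shown below; the performance function, the stopping threshold U >= 2, and the use of plain Monte Carlo (rather than interval Monte Carlo under evidence theory) are simplifying assumptions for illustration only.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    def g(x):
        """Hypothetical performance function; failure when g(x) < 0."""
        return x[:, 0] ** 3 + x[:, 1] + 6.0

    # Monte Carlo population over which only the sign of g must be predicted.
    population = rng.normal(0.0, 1.5, size=(5000, 2))

    # Small initial design of experiments, then active enrichment with the
    # U learning function U = |mu| / sigma (points with U < 2 have an
    # uncertain sign and are added to the training set).
    X = rng.normal(0.0, 1.5, size=(12, 2))
    y = g(X)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

    for _ in range(30):
        gp.fit(X, y)
        mu, sigma = gp.predict(population, return_std=True)
        U = np.abs(mu) / np.maximum(sigma, 1e-12)
        best = int(np.argmin(U))
        if U[best] >= 2.0:          # signs of all points are trusted; stop learning
            break
        X = np.vstack([X, population[best]])
        y = np.append(y, g(population[best:best + 1]))

    pf = float(np.mean(gp.predict(population) < 0.0))
    print(f"estimated failure probability = {pf:.4f}")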

  6. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters, which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data, which have been classified as either the time type or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure rate component data; 2) failure rate inference and reliability prediction from the time type of failure data; 3) analyses of the demand type of failure data; 4) a common mode failure model applicable to the time type of failure data; 5) estimation of common mode failures from 'near-miss' demand type of failure data

  7. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based description of components failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process is highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to ease drastically the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability database such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► It results in a need for a reliability database able to deal with model-based description of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps dealing with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability database such as FIDES.

  8. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  9. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  10. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate the reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristics of the BMUA, in which information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown

  11. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    Science.gov (United States)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. Results indicate that there are, at present, co-benefits for emissions reductions when customers

  12. A sensitive fluorescent probe for the polar solvation dynamics at protein-surfactant interfaces.

    Science.gov (United States)

    Singh, Priya; Choudhury, Susobhan; Singha, Subhankar; Jun, Yongwoong; Chakraborty, Sandipan; Sengupta, Jhimli; Das, Ranjan; Ahn, Kyo-Han; Pal, Samir Kumar

    2017-05-17

    Relaxation dynamics at the surface of biologically important macromolecules is important taking into account their functionality in molecular recognition. Over the years it has been shown that the solvation dynamics of a fluorescent probe at biomolecular surfaces and interfaces account for the relaxation dynamics of polar residues and associated water molecules. However, the sensitivity of the dynamics depends largely on the localization and exposure of the probe. For noncovalent fluorescent probes, localization at the region of interest in addition to surface exposure is an added challenge compared to the covalently attached probes at the biological interfaces. Here we have used a synthesized donor-acceptor type dipolar fluorophore, 6-acetyl-(2-((4-hydroxycyclohexyl)(methyl)amino)naphthalene) (ACYMAN), for the investigation of the solvation dynamics of a model protein-surfactant interface. A significant structural rearrangement of a model histone protein (H1) upon interaction with anionic surfactant sodium dodecyl sulphate (SDS) as revealed from the circular dichroism (CD) studies is nicely corroborated in the solvation dynamics of the probe at the interface. The polarization gated fluorescence anisotropy of the probe compared to that at the SDS micellar surface clearly reveals the localization of the probe at the protein-surfactant interface. We have also compared the sensitivity of ACYMAN with other solvation probes including coumarin 500 (C500) and 4-(dicyanomethylene)-2-methyl-6-(p-dimethylamino-styryl)-4H-pyran (DCM). In comparison to ACYMAN, both C500 and DCM fail to probe the interfacial solvation dynamics of a model protein-surfactant interface. While C500 is found to be delocalized from the protein-surfactant interface, DCM becomes destabilized upon the formation of the interface (protein-surfactant complex). The timescales obtained from this novel probe have also been compared with other femtosecond resolved studies and molecular dynamics simulations.

  13. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes, not on individual data. Unlike in conventional reliability schemes, retransmission is inapplicable in a BAWSN, since it only delays data arrival, which is not acceptable for time-critical applications. Time-driven applications require high data reliability to maintain detection and response. Hence, the transmission reliability for the BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability based on the critical time. The proposed model is evaluated through simulation and then compared with experimental results obtained in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.

  14. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Reliability analysis of safety-critical software has remained challenging despite the huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of the software would provide a better basis for reliability estimation of safety-critical software. Digitalization of the reactor protection system of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and are practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of the I and C systems in NPPs introduces difficulties and uncertainty into the reliability analysis methods for digital systems and components because software failure mechanisms are still unclear.

  15. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and more tolerant of a series of non-perfect operational states owing to their non-deterministic server-client distributions. Thus, a reliability model that takes into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of such networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating the traditional binary-state network reliability parameters, but is also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.

  16. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

    International Nuclear Information System (INIS)

    Hollo, E.

    1985-08-01

    This Final Report summarizes the results of R/D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the three-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available ''failure mode and effect analysis'' methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; definition of functional requirements for a reliability data bank (RDB) in WWER-440 units; and determination of the RDB input/output data structure and data manipulation services. The methods used are a-priori failure mode and effect analysis, a combined fault tree/event tree modelling technique, structural computer programming, and probability theory applied to the nuclear field

  17. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. On the assumption that vibration directly relates to operation reliability and to the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of the vibration as a function of the operating point was studied; the idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and, in the absence of resonance phenomena, vibration also scales approximately with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedule produced by the traditional one, keeps the pump operating at low vibration, and thereby increases operation reliability and decreases maintenance cost.
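
    As a rough illustration of the relationship described above, the following Python sketch expresses operation reliability as a normalized vibration level, with a bathtub-shaped dependence on flow (relative to BEP) and a quadratic dependence on rotation speed. The functional forms, coefficients, and vibration limit are invented for illustration and are not taken from the paper.

```python
import numpy as np

def vibration(q_frac_bep, speed_frac, v_min=1.0, k_flow=8.0, k_speed=1.0):
    """Hypothetical vibration level (mm/s RMS) for one pump.

    q_frac_bep : flow as a fraction of best-efficiency-point (BEP) flow
    speed_frac : rotation speed as a fraction of rated speed
    The bathtub shape in flow and the quadratic speed dependence follow the
    qualitative laws described in the abstract; the coefficients are invented.
    """
    # Bathtub in flow: minimum near 0.9*BEP, rising steeply on both sides.
    flow_term = v_min + k_flow * (q_frac_bep - 0.9) ** 2
    # Vibration grows with the square of rotation speed (no resonance assumed).
    return flow_term * (1.0 + k_speed * speed_frac ** 2)

def operation_reliability(q_frac_bep, speed_frac, v_limit=7.1):
    """Normalize vibration against an allowable limit (e.g. an ISO-type zone limit)."""
    v = vibration(q_frac_bep, speed_frac)
    return float(np.clip(1.0 - v / v_limit, 0.0, 1.0))

if __name__ == "__main__":
    for q in (0.6, 0.9, 1.1):
        print(q, round(operation_reliability(q, speed_frac=1.0), 3))
```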

  18. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    Science.gov (United States)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

    The sealing performance of an aircraft electromechanical system has a great influence on flight safety, so the reliability of its typical seal structures is analyzed by researchers. In this paper, a reciprocating seal structure is taken as the research object for a structural reliability study. Based on the finite element numerical simulation method, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is established, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model built with the EFF learning mechanism is used to describe the failure probability of the seal ring, so as to evaluate the reliability of the sealing structure. This article proposes a new numerical approach to the reliability analysis of sealing structures and also provides a theoretical basis for the optimal design of such structures.

  19. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in the measurement of the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. The chairside digital impression was then used for the acquisition of 3D digital models, and the software Picpick was employed for the measurement of the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, which was a manually measured experimental model cusp angle. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable in measuring the cusp inclination of a digital model.

  20. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference on SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  1. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products have complex structures, and their reliability is evaluated through two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics is always positive and strictly increasing. In such a case, the gamma process is usually considered as a degradation process due to its independent, non-negative increments. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. An inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
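
    As an illustration of the kind of degradation process described above, the following Python sketch simulates gamma-process sample paths for a single performance characteristic and estimates the marginal reliability as the probability that degradation stays below a failure threshold. The shape rate, scale, and threshold are illustrative values only; the paper additionally models the dependence between two characteristics via a bivariate Birnbaum-Saunders approximation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_degradation_paths(t_grid, shape_rate, scale, n_paths=10000):
    """Simulate gamma-process degradation paths.

    The increment over [t_i, t_{i+1}] is Gamma(shape_rate*(t_{i+1} - t_i), scale),
    so increments are independent and non-negative, as in the abstract.
    shape_rate and scale are illustrative values, not taken from the paper.
    """
    dt = np.diff(t_grid)
    increments = rng.gamma(shape_rate * dt, scale, size=(n_paths, dt.size))
    paths = np.cumsum(increments, axis=1)
    return np.hstack([np.zeros((n_paths, 1)), paths])

t = np.linspace(0.0, 100.0, 101)
x = gamma_degradation_paths(t, shape_rate=0.2, scale=1.0)

threshold = 30.0                              # failure when degradation exceeds this level
reliability = (x < threshold).mean(axis=0)    # marginal R(t) for one characteristic
print(reliability[[10, 50, 100]])
```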

  2. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    Science.gov (United States)

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.

  3. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The influence of collision damage models on oil tanker and bulk carrier reliability is investigated by weighing the IACS deterministic model against GOALDS/IMO database statistics for collision events, which substantiate the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of the incremental-iterative method, to account for neutral axis rotation and equilibrium of the horizontal bending moment due to cross-section asymmetry after collision events. Reliability analysis is performed to investigate the influence of the statistical properties of collision penetration depth and height on hull girder sagging/hogging failure probabilities. Besides, the influence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  4. Strong Stretching of Poly(ethylene glycol) Brushes Mediated by Ionic Liquid Solvation.

    Science.gov (United States)

    Han, Mengwei; Espinosa-Marzal, Rosa M

    2017-09-07

    We have measured forces between mica surfaces coated with a poly(ethylene glycol) (PEG) brush solvated by a vacuum-dry ionic liquid, 1-ethyl-3-methyl imidazolium bis(trifluoromethylsulfonyl)imide, with a surface forces apparatus. At high grafting density, the solvation mediated by the ionic liquid causes the brush to stretch twice as much as in water. Modeling of the steric repulsion indicates that PEG behaves as a polyelectrolyte; the hydrogen bonding between ethylene glycol and the imidazolium cation seems to effectively charge the polymer brush, which justifies the strong stretching. Importantly, under strong polymer compression, solvation layers are squeezed out at a higher rate than for the neat ionic liquid. We propose that the thermal fluctuations of the PEG chains, larger in the brush than in the mushroom configuration, maintain the fluidity of the ionic liquid under strong compression, in contrast to the solid-like squeezing-out behavior of the neat ionic liquid. This is the first experimental study of the behavior of a polymer brush solvated by an ionic liquid under nanoconfinement.

  5. Solvent density inhomogeneities and solvation free energies in supercritical diatomic fluids: a density functional approach.

    Science.gov (United States)

    Husowitz, B; Talanquer, V

    2007-02-07

    Density functional theory is used to explore the solvation properties of a spherical solute immersed in a supercritical diatomic fluid. The solute is modeled as a hard core Yukawa particle surrounded by a diatomic Lennard-Jones fluid represented by two fused tangent spheres using an interaction site approximation. The authors' approach is particularly suitable for thoroughly exploring the effect of different interaction parameters, such as solute-solvent interaction strength and range, solvent-solvent long-range interactions, and particle size, on the local solvent structure and the solvation free energy under supercritical conditions. Their results indicate that the behavior of the local coordination number in homonuclear diatomic fluids follows trends similar to those reported in previous studies for monatomic fluids. The local density augmentation is particularly sensitive to changes in solute size and is affected to a lesser degree by variations in the solute-solvent interaction strength and range. The associated solvation free energies exhibit a nonmonotonous behavior as a function of density for systems with weak solute-solvent interactions. The authors' results suggest that solute-solvent interaction anisotropies have a major influence on the nature and extent of local solvent density inhomogeneities and on the value of the solvation free energies in supercritical solutions of heteronuclear molecules.

  6. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

    The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment for mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems like an EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have therefore developed a Markov model to represent the stochastic process of dynamic systems whose states change over time. The Markov model enables us to develop a dynamic reliability model of the EDG. Compared with the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems, this model can represent all possible states of the EDG. To assess the regulatory policy for test intervals, we performed two simulations with the developed model, based on the generic data and on the plant-specific data of YGN 3, respectively. We also estimate the effects of various repair rates and of the fraction of starting failures caused by demand shock on the reliability of the EDG. Finally, the aging effect is analyzed. (author). 23 refs., 19 figs., 9 tabs.
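
    A minimal sketch of the kind of Markov calculation involved, assuming only a two-state (available/failed) repairable component with constant failure and repair rates; the rates below are invented for illustration, and the model described in the report has many more states (testing, repair, undetected failure, aging, and so on).

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time Markov sketch for a standby EDG.
# States: 0 = available, 1 = failed (awaiting repair).
# lam (failures/hour) and mu (repairs/hour) are illustrative values only.
lam, mu = 1.0e-4, 1.0 / 24.0

Q = np.array([[-lam,  lam],
              [  mu,  -mu]])          # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0])             # start in the available state

for t in (24.0, 720.0, 8760.0):       # 1 day, 1 month, 1 year
    p_t = p0 @ expm(Q * t)            # transient state probabilities P(t) = P(0) exp(Qt)
    print(f"t={t:7.0f} h  unavailability={p_t[1]:.3e}")

# Steady-state unavailability for comparison: lam / (lam + mu)
print("steady state:", lam / (lam + mu))
```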

  7. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need not only to capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis

  8. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data-modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, in order to weigh the computational load against the reliability of the estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed slightly better estimates from the evolutionary polynomial model than from the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and had a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how the uncertainty increases as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
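
    The trade-off between model complexity and reliability of the estimates discussed above can be illustrated with a simple fit comparison on synthetic shoreline data; the data, polynomial degrees, and error metric below are purely illustrative and are unrelated to the Torre Colimena data set or to the EPR implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic yearly shoreline positions (m) -- invented data, purely illustrative.
t = np.arange(20, dtype=float)                    # survey epoch (years)
y = 3.0 + 0.4 * t - 0.02 * t**2 + rng.normal(0, 0.5, t.size)

def fit_and_score(degree):
    coeffs = np.polyfit(t, y, degree)             # least-squares polynomial fit
    pred = np.polyval(coeffs, t)
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    return coeffs, rmse

for d in (1, 2, 3):
    _, rmse = fit_and_score(d)
    print(f"degree {d}: in-sample RMSE = {rmse:.3f} m")

# Extrapolation beyond the observation window illustrates growing uncertainty.
coeffs, _ = fit_and_score(2)
print("forecast at t=25:", round(np.polyval(coeffs, 25.0), 2), "m")
```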

  9. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce the expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. The parameters, namely the shape and scale of the Power Law Process and the repair efficiency, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding the preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
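
    As a hedged sketch of how these model classes modify the failure intensity, the code below evaluates the conditional intensity of a Power Law Process baseline under the first-order ARA and ARI corrections in one common formulation (a repair removes a fraction of the accumulated age, or a fraction of the current intensity, respectively); the parameter values are illustrative and are not the estimates reported for the truck data.

```python
import numpy as np

def plp_intensity(t, beta, eta):
    """Power Law Process baseline intensity lambda_0(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def conditional_intensity(t_grid, failures, beta, eta, rho, model="ARA1"):
    """Conditional intensity after imperfect repairs (first-order ARA/ARI forms).

    ARA1: lambda(t) = lambda_0(t - rho * t_last)          (repair removes part of the age)
    ARI1: lambda(t) = lambda_0(t) - rho * lambda_0(t_last) (repair removes part of the intensity)
    beta, eta, rho are illustrative values, not fitted parameters.
    """
    failures = np.asarray(failures, dtype=float)
    out = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        past = failures[failures < t]
        t_last = past[-1] if past.size else 0.0
        if model == "ARA1":
            out[i] = plp_intensity(t - rho * t_last, beta, eta)
        else:  # ARI1
            reduction = rho * plp_intensity(t_last, beta, eta) if t_last > 0 else 0.0
            out[i] = max(plp_intensity(t, beta, eta) - reduction, 0.0)
    return out

grid = np.linspace(1.0, 100.0, 200)
lam = conditional_intensity(grid, failures=[30.0, 55.0, 80.0], beta=2.0, eta=40.0, rho=0.6)
print(lam[:3])
```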

  10. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

    Piping systems are part of the passive structural elements in power plants. The analysis of piping systems and their quantification in terms of failure probability is of utmost importance. Piping systems may fail due to various degradation mechanisms such as thermal fatigue, erosion-corrosion, stress corrosion cracking and vibration fatigue. Examination of previous results shows that erosion-corrosion is the more prevalent mechanism and that wall thinning is a time-dependent phenomenon. This paper consolidates the work done by various investigators on erosion-corrosion in estimating the erosion-corrosion rate and in making reliability predictions. A comparison of various erosion-corrosion models is made. Reliability prediction based on the remaining strength of pipelines corroded by wall thinning is also attempted. Variables in the limit state functions are modelled using normal distributions, and reliability assessment is carried out using some of the existing failure pressure models. A steady-state corrosion rate is assumed to estimate the corrosion defect, and the First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time, using the software for Component Reliability evaluation (COMREL). (author)
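
    A minimal sketch of a FORM calculation of the type described, assuming a simple wall-thinning limit state (thin-wall burst pressure of the corroded pipe minus operating pressure) with independent normal variables and the Hasofer-Lind/Rackwitz-Fiessler iteration; all distributions, dimensions, and rates are invented for illustration and do not come from the paper or from COMREL.

```python
import numpy as np
from scipy.stats import norm

# g(X) = burst pressure of the thinned wall - operating pressure; failure when g < 0.
mu  = np.array([10.0, 0.15, 7.0])    # [initial wall t0 (mm), thinning rate (mm/yr), op. pressure (MPa)]
sig = np.array([0.5,  0.03, 0.35])   # standard deviations (all variables normal here)
T   = 20.0                           # service time (years)

def g(x):
    t0, rate, p_op = x
    t_rem = t0 - rate * T                        # remaining wall thickness
    p_burst = 2.0 * 400.0 * t_rem / 300.0        # thin-wall estimate: 2*sigma_flow*t/D (illustrative)
    return p_burst - p_op

def grad(fun, x, h=1e-6):
    return np.array([(fun(x + h * e) - fun(x - h * e)) / (2 * h) for e in np.eye(x.size)])

u = np.zeros(3)                                  # start at the mean in standard normal space
for _ in range(30):
    x = mu + sig * u                             # map u -> physical space
    gv, dg_dx = g(x), grad(g, x)
    dg_du = dg_dx * sig
    a = dg_du / np.linalg.norm(dg_du)
    u = a * (a @ u - gv / np.linalg.norm(dg_du))  # HL-RF update step

beta = np.linalg.norm(u)
print(f"reliability index beta = {beta:.2f},  Pf ~ {norm.cdf(-beta):.2e}")
```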

  11. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
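
    The local refinement of the surrogate near the limit state is typically driven by an acquisition function such as the expected feasibility function (EFF); the sketch below evaluates one common closed form of the EFF from a Gaussian-process mean and standard deviation, with the feasibility band epsilon set to twice the predictive standard deviation. Treat it as an illustrative sketch rather than the authors' exact implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, z_bar=0.0):
    """Expected feasibility function (EFF) for a Gaussian-process prediction.

    mu, sigma : GP posterior mean and standard deviation of the limit state at a point
    z_bar     : contour of interest (0 for a limit state g = 0)
    Uses epsilon = 2*sigma, as is common in EGRA-type methods.
    """
    eps = 2.0 * sigma
    z_lo, z_hi = z_bar - eps, z_bar + eps
    t0 = (z_bar - mu) / sigma
    t1 = (z_lo - mu) / sigma
    t2 = (z_hi - mu) / sigma
    return ((mu - z_bar) * (2.0 * norm.cdf(t0) - norm.cdf(t1) - norm.cdf(t2))
            - sigma * (2.0 * norm.pdf(t0) - norm.pdf(t1) - norm.pdf(t2))
            + eps * (norm.cdf(t2) - norm.cdf(t1)))

# Points far from the limit state with small uncertainty score near zero;
# uncertain points near g = 0 score high and would be evaluated next.
for mu, sigma in [(3.0, 0.1), (0.2, 0.5), (0.0, 1.0)]:
    print(mu, sigma, round(expected_feasibility(mu, sigma), 4))
```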

  12. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

    Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability is dependent on the digitalisation hardware and software used. We thus investigated intra-observer reliability and agreement / conformity of a particular digital model analysis work-flow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. Total difference maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than intra-observer variation within each method with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both work-flows were reliable, the investigated digital work-flow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes but wide ranges are expected for some computed space parameters due to cumulative error.

  13. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.

  14. Nonpolar solvation dynamics for a nonpolar solute in room ...

    Indian Academy of Sciences (India)

    Sandipa Indra

    2018-01-30

    Keywords: solvation dynamics; nonpolar solvation; ionic liquid; molecular dynamics; linear response theory. J. Chem. Sci. (2018) 130:3.

  15. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

    In this paper, the stochastic automata networks (SAN) formalism is applied to model the reliability of power system substations. The proposed strategy reduces the size of the state space of the Markov chain model and simplifies system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, the implementation of the Markov chain model using the SAN method is a relatively easy task. - Highlights: • We present a methodology for applying the stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results for systems with independent automata and with functional transition rates are similar. • The conditions under which total independence of automata can be assumed are addressed.

  16. Abacavir methanol 2.5-solvate

    Directory of Open Access Journals (Sweden)

    Phuong-Truc T. Pham

    2009-08-01

    Full Text Available The structure of abacavir (systematic name: {(1S,4R)-4-[2-amino-6-(cyclopropylamino)-9H-purin-9-yl]cyclopent-2-en-1-yl}methanol), C14H18N6O·2.5CH3OH, consists of hydrogen-bonded ribbons which are further held together by additional hydrogen bonds involving the hydroxyl group and two N atoms on an adjacent purine. The asymmetric unit also contains 2.5 molecules of methanol solvate which were grossly disordered and were excluded using the SQUEEZE subroutine in PLATON [Spek (2009). Acta Cryst. D65, 148–155].

  17. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller, by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach, such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  18. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, thus providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and the modeling time is saved. (authors)
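
    Once an MFM has been mapped to a fault tree, qualitative or simple quantitative evaluation follows standard fault tree logic; the sketch below evaluates the top-event probability of a small, invented AND/OR tree assuming independent basic events, only to illustrate the kind of structure such a mapping produces.

```python
# Toy fault tree evaluation with independent basic events.
# Gate layout and basic-event probabilities are invented for illustration.

def or_gate(*p):          # P(A or B or ...) = 1 - prod(1 - p_i) for independent events
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):         # P(A and B and ...) = prod(p_i) for independent events
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Basic events: pump fails to start, pump fails to run, valve fails to open, signal failure.
p_pump_start, p_pump_run, p_valve, p_signal = 3e-3, 1e-3, 2e-3, 5e-4

train_fails = or_gate(p_pump_start, p_pump_run, p_valve)   # one injection train fails
top_event = or_gate(and_gate(train_fails, train_fails),    # both redundant trains fail...
                    p_signal)                               # ...or the actuation signal fails
print(f"top-event probability ~ {top_event:.2e}")
```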

  19. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
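
    The reliability screening described above reduces to a simple per-cell computation over the ensemble of stress-change maps; the sketch below applies the CV ≤ 0.5 criterion to a synthetic ensemble (random numbers standing in for the 2500 ΔCFS maps), purely to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(42)
n_models, ny, nx = 2500, 50, 50
# Synthetic ensemble of Coulomb stress-change maps (MPa); real maps would come
# from the candidate finite-fault source models.
dcfs = rng.normal(loc=0.02, scale=0.05, size=(n_models, ny, nx))

mean = dcfs.mean(axis=0)
std = dcfs.std(axis=0, ddof=1)
cv = np.abs(std / mean)        # coefficient of variation per grid cell (mean nonzero here)

reliable = cv <= 0.5           # equivalent to |mean| >= 2 * std
print(f"{reliable.mean():.1%} of grid cells pass the reliability criterion")
```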

  20. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  1. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid reliability analysis algorithm combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. Comparison results show that the proposed method is more efficient because it requires only a small number of sample points.

  2. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
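
    The Fussell-Vesely importance measure used for validation can be illustrated with a toy cut-set calculation; the basic events, minimal cut sets, and rare-event approximation below are invented for illustration and are unrelated to the plant-specific data handled by the algorithm.

```python
# Sketch of a Fussell-Vesely importance computation from minimal cut sets.

def union_prob(cut_set_probs):
    """Rare-event approximation: P(top) ~ sum of minimal cut set probabilities."""
    return sum(cut_set_probs)

p = {"A": 1e-3, "B": 5e-4, "C": 2e-3}            # basic-event probabilities (invented)
cut_sets = [("A", "B"), ("C",), ("A", "C")]       # minimal cut sets of the top event (invented)

def cut_prob(cs):
    prod = 1.0
    for event in cs:
        prod *= p[event]
    return prod

p_top = union_prob([cut_prob(cs) for cs in cut_sets])

for event in p:
    # Fussell-Vesely: fraction of top-event probability carried by cut sets containing the event.
    contrib = union_prob([cut_prob(cs) for cs in cut_sets if event in cs])
    print(f"FV({event}) = {contrib / p_top:.3f}")
```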

  3. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

    Full Text Available During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  4. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  5. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  6. Reliability model for helicopter main gearbox lubrication system using influence diagrams

    International Nuclear Information System (INIS)

    Rashid, H.S.J.; Place, C.S.; Mba, D.; Keong, R.L.C.; Healey, A.; Kleine-Beek, W.; Romano, M.

    2015-01-01

    The loss of oil from a helicopter main gearbox (MGB) leads to increased friction between components, a rise in component surface temperatures, and subsequent mechanical failure of gearbox components. A number of significant helicopter accidents have been caused by such loss of lubrication. This paper presents a model to assess the reliability of helicopter MGB lubricating systems. Safety risk modeling was conducted for MGB oil system related accidents in order to analyse key failure mechanisms and the contributory factors. Thus, the dominant failure modes for lubrication systems and key contributing components were identified. The Influence Diagram (ID) approach was then employed to investigate reliability issues of the MGB lubrication systems at the level of primary causal factors, thus systematically investigating a complex context of events, conditions, and influences that are direct triggers of the helicopter MGB lubrication system failures. The interrelationships between MGB lubrication system failure types were thus identified, and the influence of each of these factors on the overall MGB lubrication system reliability was assessed. This paper highlights parts of the HELMGOP project, sponsored by the European Aviation Safety Agency to improve helicopter main gearbox reliability. - Highlights: • We investigated methods to optimize helicopter MGB oil system run-dry capability. • Used Influence Diagram to assess design and maintenance factors of MGB oil system. • Factors influencing overall MGB lubrication system reliability were identified. • This globally influences current and future helicopter MGB designs

  7. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models

  8. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Energy Technology Data Exchange (ETDEWEB)

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  9. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    International Nuclear Information System (INIS)

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements

  10. Reliability Analysis of an Extended Shock Model and Its Optimization Application in a Production Line

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2014-01-01

    some important reliability indices are derived, such as availability, failure frequency, mean vacation period, mean renewal cycle, mean startup period, and replacement frequency. Finally, a production line controlled by two cold-standby computers is modeled to present numerical illustration and its optimal part-time job policy at a maximum profit.

  11. Role of frameworks, models, data, and judgment in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hannaman, G W

    1986-05-01

    Many advancements in the methods for treating human interactions in PRA studies have occurred in the last decade. These advancements appear to increase the capability of PRAs to extend beyond just the assessment of the human's importance to safety. However, variations in the application of these advanced models, data, and judgements in recent PRAs make quantitative comparisons among studies extremely difficult. This uncertainty in the analysis diminishes the usefulness of the PRA study for upgrading procedures, enhancing training, simulator design, technical specification guidance, and aiding the design of the man-machine interface. Hence, there is a need for a framework to guide analysts in incorporating human interactions into the PRA systems analyses so that future users of a PRA study will have a clear understanding of the approaches, models, data, and assumptions which were employed in the initial study. This paper describes the role of the systematic human action reliability procedure (SHARP) in providing a road map through the complex terrain of human reliability that promises to improve the reproducibility of such analysis in the areas of selecting the models, data, representations, and assumptions. Also described is the role that a human cognitive reliability model can have in collecting data from simulators and helping analysts assign human reliability parameters in a PRA study. Use of these systematic approaches to perform or upgrade existing PRAs promises to make PRA studies more useful as risk management tools.

  12. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    Science.gov (United States)

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…

  13. A reliability model for interlayer dielectric cracking during fast thermal cycling

    NARCIS (Netherlands)

    Nguyen, Van Hieu; Salm, Cora; Krabbenborg, B.H.; Krabbenborg, B.H.; Bisschop, J.; Mouthaan, A.J.; Kuper, F.G.; Ray, Gary W.; Smy, Tom; Ohta, Tomohiro; Tsujimura, Manabu

    2003-01-01

    Interlayer dielectric (ILD) cracking can result in short circuits of multilevel interconnects. This paper presents a reliability model for ILD cracking induced by fast thermal cycling (FTC) stress. FTC tests have been performed under different temperature ranges (∆T) and minimum temperatures (Tmin).

  14. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic as, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model to describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model which will describe both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
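
    As a rough illustration of the formulation described above, the following Python sketch fits an NHPP mean value function driven by an unbounded ("log-power") test-effort function to a small set of failure counts. The functional form W(t) = [ln(1+t)]^beta, the starting values and the data are all illustrative assumptions, and the paper's ANN-based weight selection is replaced here by an ordinary least-squares curve fit.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical cumulative failure counts observed at increasing test times.
    t = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
    failures = np.array([5, 9, 16, 25, 36, 48, 58, 66], dtype=float)

    def log_power_effort(t, beta):
        """Assumed 'log-power' test-effort function: unbounded as t -> infinity."""
        return np.log1p(t) ** beta

    def mean_value(t, a, b, beta):
        """NHPP mean value function m(t) = a * (1 - exp(-b * W(t)))."""
        return a * (1.0 - np.exp(-b * log_power_effort(t, beta)))

    # Plain least-squares fit of the three parameters to the (toy) failure data.
    popt, _ = curve_fit(mean_value, t, failures, p0=[80.0, 0.2, 1.5], maxfev=20000)
    a, b, beta = popt
    print(f"expected total faults a = {a:.1f}")
    print(f"predicted cumulative failures at t=256: {mean_value(256.0, *popt):.1f}")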

  15. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that usage of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
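
    The abstract's central point, that Weibull (rather than exponential) failure times can be propagated to a system failure probability and checked by Monte Carlo simulation, can be illustrated with a small Python sketch. The configuration below (one repairable main train backed by a non-repairable warm-standby train), the mission time, the repair duration and all Weibull parameters are hypothetical stand-ins, not the actual NCCW layout or data.

    import numpy as np

    rng = np.random.default_rng(42)

    MISSION = 8760.0        # mission time, hours (assumed)
    N_RUNS = 100_000        # Monte Carlo histories

    def weibull(shape, scale):
        """One Weibull-distributed lifetime sample."""
        return scale * rng.weibull(shape)

    def system_fails_once():
        """One history: a repairable main train (Weibull times to failure, as-good-as-new
        repair) backed by a non-repairable train in warm standby that ages from t = 0.
        The system fails if the backup is unavailable while the main train is under repair."""
        backup_life = weibull(2.0, 30_000.0)
        t = 0.0
        while t < MISSION:
            t += weibull(1.5, 5_000.0)          # main train runs until its next failure
            if t >= MISSION:
                return False
            repair_end = t + 72.0               # constant repair duration, hours
            if backup_life < min(repair_end, MISSION):
                return True                     # backup cannot cover the repair window
            t = repair_end
        return False

    failures = sum(system_fails_once() for _ in range(N_RUNS))
    print(f"estimated system failure probability: {failures / N_RUNS:.4f}")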

  16. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement-based approaches, holistic techniques and decision analytic approaches. (UK)

  17. Reliability of a new biokinetic model of zirconium in internal dosimetry: part I, parameter uncertainty analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a

  18. A Combined Reliability Model of VSC-HVDC Connected Offshore Wind Farms Considering Wind Speed Correlation

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    and WTGs outage. The wind speed correlation between different WFs is included in the two-dimensional multistate WF model by using an improved k-means clustering method. Then, the entire system with two WFs and a three-terminal VSC-HVDC system is modeled as a multi-state generation unit. The proposed model...... is applied to the Roy Billinton test system (RBTS) for adequacy studies. Both the probability and frequency indices are calculated. The effectiveness and accuracy of the combined model are validated by comparing results with the sequential Monte Carlo simulation (MCS) method. The effects of the outage of VSC-HVDC...... system and wind speed correlation on the system reliability were analyzed. Sensitivity analyses were conducted to investigate the impact of repair time of the offshore VSC-HVDC system on system reliability....

  19. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    International Nuclear Information System (INIS)

    Knezevic, J.; Odoom, E.R.

    2001-01-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.

  20. Model reliability and software quality assurance in simulation of nuclear fuel waste management systems

    International Nuclear Information System (INIS)

    Oeren, T.I.; Elzas, M.S.; Sheng, G. (Wageningen Agricultural Univ., Netherlands; McMaster Univ., Hamilton, Ontario)

    1985-01-01

    As is the case with all scientific simulation studies, computerized simulation of nuclear fuel waste management systems can introduce and hide various types of errors. Frameworks to clarify issues of model reliability and software quality assurance are offered. Potential problems with reference to the main areas of concern for reliability and quality are discussed; e.g., experimental issues, decomposition, scope, fidelity, verification, requirements, testing, correctness and robustness are treated with reference to the experience gained in the past. A list comprising over 80 of the most common computerization errors is provided. Software tools and techniques used to detect and to correct computerization errors are discussed.

  1. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  2. Phd study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. This study had a multiple strategy design with a philosophical base of critical realism...... and pragmatism. The fixed design for this study was a between- and within-groups design in testing the APC's reliability and validity. The two different groups were parents with neglected children and parents with non-neglected children. The flexible design had a multiple case study strategy specifically...

  3. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table modeling Method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied.
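
    A minimal Python sketch of the truth-table idea, under assumed component names, failure probabilities and a simple series success logic (none of which are taken from the paper): every combination of operating/failed states is enumerated, and the probabilities of the successful states are summed to obtain the system reliability.

    from itertools import product

    # Hypothetical power-supply blocks and their failure probabilities.
    components = {"transformer": 0.01, "rectifier": 0.02,
                  "regulator": 0.015, "output_filter": 0.005}

    def system_works(state):
        """Assumed series success logic: every block must be operating."""
        return all(state.values())

    reliability = 0.0
    for outcome in product([True, False], repeat=len(components)):   # full truth table
        state = dict(zip(components, outcome))
        p = 1.0
        for name, up in state.items():
            p *= (1.0 - components[name]) if up else components[name]
        if system_works(state):
            reliability += p

    print(f"system reliability = {reliability:.5f}")
    print(f"system failure probability = {1.0 - reliability:.5f}")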

  4. Looking for the best experimental conditions to detail the protein solvation shell in a binary aqueous solvent via small angle scattering

    International Nuclear Information System (INIS)

    Ortore, Maria Grazia; Sinibaldi, Raffaele; Spinozzi, Francesco; Carbini, Andrea; Carsughi, Flavio; Mariani, Paolo

    2009-01-01

    Protein hydration features attract particular interest in different fields, from biology to physics, crossing chemistry and medicine. Particular attention is devoted to proteins dissolved in binary aqueous mixtures, since the presence of a cosolvent can induce modifications in structural and functional properties. We have recently developed a methodology to obtain a quantitative description of the protein solvation shell from a set of in-solution small angle scattering experiments, simultaneously analysed by a global-fit approach. In this paper, numerical simulations of small angle scattering curves are presented to assess the sensitivity of the technique to different experimental conditions. Simulations concern two model proteins of different molecular weights and a unique cosolvent. A reliability test is introduced in order to find the best experimental conditions to be investigated, together with the most suitable scattering probe (neutrons or X-rays).

  5. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs-Leadership, Safety Culture, and Robust Process Improvement ®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model-teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  6. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  7. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for the detection, localization, tracking, collection, and processing of information from monitoring, telemetry, and control systems, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and mitigates common-cause failure. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve optimal redundancy problems based on a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
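
    The type-variety idea of mixing diverse component types under resource constraints can be sketched, in a much simplified single-level form, as a small exhaustive search in Python. The component types, failure probabilities, unit costs and budget below are hypothetical, and the "at least one unit works" success criterion only illustrates the kind of objective involved, not the paper's two-level formulation.

    from itertools import product

    # Hypothetical component types performing the same detection function:
    # (failure probability per unit, unit cost).
    types = {"type_A": (0.05, 3.0), "type_B": (0.08, 2.0), "type_C": (0.12, 1.0)}
    BUDGET = 10.0
    MAX_PER_TYPE = 4

    best = None
    for counts in product(range(MAX_PER_TYPE + 1), repeat=len(types)):
        cost = sum(n * c for n, (_, c) in zip(counts, types.values()))
        if cost > BUDGET or sum(counts) == 0:
            continue
        # Success if at least one unit of any type works (independence assumed;
        # mixing diverse types mitigates common-cause failure of a single type).
        p_all_fail = 1.0
        for n, (q, _) in zip(counts, types.values()):
            p_all_fail *= q ** n
        reliability = 1.0 - p_all_fail
        if best is None or reliability > best[0]:
            best = (reliability, counts, cost)

    rel, counts, cost = best
    print(dict(zip(types, counts)), f"reliability={rel:.6f}", f"cost={cost}")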

  8. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.

    2008-01-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  9. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, D-76021 Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)
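
    The paradigm example, a component cycling between up and down states until an irreparable failure, can be reproduced numerically with a three-state Markov generator. The Python sketch below uses assumed rates (not the paper's values) and adds a crude finite-difference sensitivity, which is the kind of derivative the adjoint procedure of the paper computes far more efficiently.

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical rates, per hour.
    lam_repairable = 1e-2   # up -> down (repairable failure)
    mu_repair      = 1e-1   # down -> up (repair completion)
    lam_terminal   = 1e-4   # up -> failed irreparably (absorbing)

    def availability(t, lam_r, mu, lam_t):
        """P(component is up at time t) for the 3-state chain (up, down, dead)."""
        Q = np.array([[-(lam_r + lam_t), lam_r, lam_t],
                      [mu,              -mu,    0.0 ],
                      [0.0,              0.0,   0.0 ]])
        p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)   # start in the 'up' state
        return p[0]

    t = 1000.0
    a0 = availability(t, lam_repairable, mu_repair, lam_terminal)
    print(f"availability at t = {t:.0f} h: {a0:.5f}")

    # Forward-difference sensitivity to the irreparable failure rate.
    h = 1e-6
    sens = (availability(t, lam_repairable, mu_repair, lam_terminal + h) - a0) / h
    print(f"d(availability)/d(lambda_terminal) ≈ {sens:.2f}")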

  10. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  11. Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets

    Directory of Open Access Journals (Sweden)

    Jing Li

    2015-01-01

    Full Text Available Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs) with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is their performance and quality of service (QoS). While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both analytical and simulation modeling to address the performance of CMQSs with reliability guarantee. We present a visibility-based modeling approach (VMA) for the simulation model using colored Petri nets (CPN). Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and component utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.

  12. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprises the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerating aging tests. In order to introduce time aging effect of particular component to the PSA model, it has been proposed to use the constant unavailability values on the short period of time (one year for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem of too detailed statistical models for application is the lack of data for required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trend). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  13. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprises the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerating aging tests. In order to introduce time aging effect of particular component to the PSA model, it has been proposed to use the constant unavailability values on the short period of time (one year for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem of too detailed statistical models for application is the lack of data for required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trend). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems

  14. Wave–particle interactions in a resonant system of photons and ion-solvated water

    Energy Technology Data Exchange (ETDEWEB)

    Konishi, Eiji, E-mail: konishi.eiji.27c@st.kyoto-u.ac.jp

    2017-02-26

    Highlights: • We consider a QED model of rotating water molecules with ion solvation effects. • The equations of motion are cast in terms of a conventional free electron laser. • We offer a new quantum coherence mechanism induced by collective instability. - Abstract: We investigate a laser model for a resonant system of photons and ion cluster-solvated rotating water molecules in which ions in the cluster are identical and have very low, non-relativistic velocities and a direction of motion parallel to a static electric field induced in a single direction. This model combines Dicke superradiation with wave–particle interaction. As a result, we find that the equations of motion of the system are expressed in terms of a conventional free electron laser system. This result leads to a mechanism for dynamical coherence, induced by collective instability in the wave–particle interaction.

  15. Quantitative prediction of solvation free energy in octanol of organic compounds.

    Science.gov (United States)

    Delgado, Eduardo J; Jaña, Gonzalo A

    2009-03-01

    The free energy of solvation, ΔGS0, in octanol of organic compounds is quantitatively predicted from the molecular structure. The model, involving only three molecular descriptors, is obtained by multiple linear regression analysis from a data set of 147 compounds containing diverse organic functions, namely, halogenated and non-halogenated alkanes, alkenes, alkynes, aromatics, alcohols, aldehydes, ketones, amines, ethers and esters; covering a ΔGS0 range from about -50 to 0 kJ·mol-1. The model predicts the free energy of solvation with a squared correlation coefficient of 0.93 and a standard deviation, 2.4 kJ·mol-1, just marginally larger than the generally accepted value of experimental uncertainty. The involved molecular descriptors have definite physical meaning corresponding to the different intermolecular interactions occurring in the bulk liquid phase. The model is validated with an external set of 36 compounds not included in the training set.
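
    A minimal sketch of the regression step described above: ordinary least squares on three molecular descriptors plus an intercept. The descriptor names (d1, d2, d3), the compound values and the "observed" free energies below are invented placeholders, since the abstract does not reproduce the paper's descriptors or data set.

    import numpy as np

    X = np.array([
        # d1     d2     d3   (hypothetical descriptor values)
        [1.20,  0.35,  2.1],
        [2.45,  0.10,  3.4],
        [0.80,  0.55,  1.7],
        [3.10,  0.20,  4.0],
        [1.75,  0.40,  2.8],
        [2.90,  0.15,  3.9],
    ])
    dG_obs = np.array([-8.5, -21.0, -5.2, -30.4, -14.1, -27.8])   # kJ/mol, illustrative

    # Ordinary least squares with an intercept term, as in a standard MLR model.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, dG_obs, rcond=None)

    pred = A @ coef
    ss_res = np.sum((dG_obs - pred) ** 2)
    ss_tot = np.sum((dG_obs - dG_obs.mean()) ** 2)
    print("intercept and coefficients:", np.round(coef, 3))
    print("R^2 on the (toy) training data:", round(1 - ss_res / ss_tot, 3))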

  16. Quantitative Prediction of Solvation Free Energy in Octanol of Organic Compounds

    Directory of Open Access Journals (Sweden)

    Eduardo J. Delgado

    2009-03-01

    Full Text Available The free energy of solvation, ΔGS0, in octanol of organic compounds is quantitatively predicted from the molecular structure. The model, involving only three molecular descriptors, is obtained by multiple linear regression analysis from a data set of 147 compounds containing diverse organic functions, namely, halogenated and non-halogenated alkanes, alkenes, alkynes, aromatics, alcohols, aldehydes, ketones, amines, ethers and esters; covering a ΔGS0 range from about –50 to 0 kJ·mol-1. The model predicts the free energy of solvation with a squared correlation coefficient of 0.93 and a standard deviation, 2.4 kJ·mol-1, just marginally larger than the generally accepted value of experimental uncertainty. The involved molecular descriptors have definite physical meaning corresponding to the different intermolecular interactions occurring in the bulk liquid phase. The model is validated with an external set of 36 compounds not included in the training set.

  17. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, who aimed to assess the reliability of soft tissue model based implant surgical guides, reported that the accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, likelihood ratio positive (true positive/false negative) and likelihood ratio negative (false positive/true negative), as well as the odds ratio (true results\false results - preferably more than 50), are among the tests to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned estimates for validity analysis the reported twenty-two sites (46.81%) considered accurate relate. Reliability (repeatability or reproducibility) is often assessed by statistical tests such as Pearson r, least squares and the paired t-test, all of which are among the common mistakes in reliability analysis.5 Briefly, for quantitative variables the Intra-Class Correlation Coefficient (ICC) and for qualitative variables weighted kappa should be used, with caution because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimation of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be

  18. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The
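
    A hedged sketch of how such a model is typically evaluated in practice: a two-parameter Weibull reliability whose characteristic life is rescaled from accelerated-test to use conditions through a power-law voltage and Arrhenius temperature acceleration factor. This particular acceleration form, the voltage exponent, the activation energy and the test results below are assumptions chosen for illustration, not values from the presentation.

    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(v_test, v_use, t_test_k, t_use_k, n=3.0, ea=1.2):
        """Power-law voltage + Arrhenius temperature acceleration (a common MLCC form;
        the exponent n and activation energy ea here are illustrative assumptions)."""
        return (v_test / v_use) ** n * np.exp(ea / K_B * (1.0 / t_use_k - 1.0 / t_test_k))

    def weibull_reliability(t, eta, beta):
        """Two-parameter Weibull reliability R(t) = exp[-(t/eta)^beta]."""
        return np.exp(-(t / eta) ** beta)

    # Hypothetical accelerated-test result at 2x rated voltage and 125 C.
    eta_test, beta = 1.5e3, 2.2            # characteristic life (hours), shape
    af = acceleration_factor(v_test=100.0, v_use=50.0, t_test_k=398.15, t_use_k=318.15)
    eta_use = eta_test * af                # characteristic life scaled to use conditions

    t_mission = 10 * 8760.0                # ten years of continuous operation
    print(f"acceleration factor: {af:.1f}")
    print(f"R(10 years) at use conditions: {weibull_reliability(t_mission, eta_use, beta):.4f}")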

  19. A reliability-risk modelling of nuclear rad-waste facilities

    International Nuclear Information System (INIS)

    Lehmann, P.H.; El-Bassioni, A.A.

    1975-01-01

    Rad-waste disposal systems of nuclear power sites are designed and operated to collect, delay, contain, and concentrate radioactive wastes from reactor plant processes such that on-site and off-site exposures to radiation are well below permissible limits. To assist the designer in achieving minimum release/exposure goals, a computerized reliability-risk model has been developed to simulate the rad-waste system. The objectives of the model are to furnish a practical tool for quantifying the effects of changes in system configuration, operation, and equipment, and for the identification of weak segments in the system design. Primarily, the model comprises a marriage of system analysis, reliability analysis, and release-risk assessment. Provisions have been included in the model to permit the optimization of the system design subject to constraints on cost and rad-releases. The system analysis phase involves the preparation of a physical and functional description of the rad-waste facility accompanied by the formation of a system tree diagram. The reliability analysis phase embodies the formulation of appropriate reliability models and the collection of model parameters. Release-risk assessment constitutes the analytical basis whereupon further system and reliability analyses may be warranted. Release-risk represents the potential for release of radioactivity and is defined as the product of an element's unreliability at time, t, and the radioactivity available for release in time interval, Δt. A computer code (RARISK) has been written to simulate the tree diagram of the rad-waste system. Reliability and release-risk results have been generated for cases which examined the process flow paths of typical rad-waste systems, the effects of repair and standby, the variations of equipment failure and repair rates, and changes in system configurations. The essential feature of this model is that a complex system like the rad-waste facility can be easily decomposed into its
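
    Taking the definition above literally, release-risk can be computed as the product of an element's unreliability and the radioactivity available for release in an interval. The short sketch below assumes a constant failure rate and a hypothetical decaying activity purely to illustrate the bookkeeping; none of the values are from the paper.

    import numpy as np

    LAMBDA_FAIL = 2.0e-5        # element failure rate, per hour (assumed constant)
    DECAY_CONST = 1.0e-4        # radioactive decay constant, per hour (hypothetical nuclide)
    A0 = 5.0e3                  # initial activity available for release, arbitrary units
    DT = 24.0                   # risk evaluation interval, hours

    def unreliability(t):
        return 1.0 - np.exp(-LAMBDA_FAIL * t)

    def activity_in_interval(t):
        return A0 * np.exp(-DECAY_CONST * t) * DT

    for t in (720.0, 4380.0, 8760.0):   # roughly 1 month, 6 months, 1 year
        risk = unreliability(t) * activity_in_interval(t)
        print(f"t = {t:6.0f} h   release-risk = {risk:10.2f}")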

  20. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  1. On New Cautious Structural Reliability Models in the Framework of imprecise Probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev V.; Kozine, Igor

    2010-01-01

    models and generalizing conventional ones to imprecise probabilities. The theoretical setup employed for this purpose is imprecise statistical reasoning (Walley 1991), whose general framework is provided by upper and lower previsions (expectations). The appeal of this theory is its ability to capture......Uncertainty of parameters in engineering design has been modeled in different frameworks such as interval analysis, fuzzy set and possibility theories, random set theory and imprecise probability theory. The authors of this paper for many years have been developing new imprecise reliability...... both aleatory (stochastic) and epistemic uncertainty and the flexibility with which information can be represented. The previous research of the authors related to generalizing structural reliability models to imprecise statistical measures is summarized in Utkin & Kozine (2002) and Utkin (2004...

  2. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast media to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  3. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for the SS and SR interactions, respectively. The effect of variation of α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)

  4. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, it creates a challenging problem to accurately analyze the system reliability. In this paper, we assume that the system has two performance characteristics, and each performance characteristic is governed by a random effects Gamma process where the random effects can capture the unit-to-unit differences. The dependency of performance characteristics is described by a Frank copula function. Via the copula function, the reliability assessment model is proposed. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
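
    A small Python sketch of the modelling idea: each performance characteristic gets a Gamma-process marginal reliability, and the two marginals are joined through a Frank copula, which, being radially symmetric, can also serve as the survival copula used for the joint reliability. All parameter values, thresholds and the dependence parameter are illustrative assumptions; the random effects and MCMC estimation of the paper are omitted.

    import numpy as np
    from scipy.stats import gamma

    def marginal_reliability(t, alpha, beta, threshold):
        """Gamma-process degradation X(t) ~ Gamma(shape=alpha*t, scale=beta);
        the unit survives while degradation stays below its failure threshold."""
        return gamma.cdf(threshold, a=alpha * t, scale=beta)

    def frank_copula(u, v, theta):
        """Frank copula C(u, v); also used here as the survival copula
        (the Frank family is radially symmetric)."""
        num = (np.exp(-theta * u) - 1.0) * (np.exp(-theta * v) - 1.0)
        den = np.exp(-theta) - 1.0
        return -np.log1p(num / den) / theta

    # Hypothetical parameters for two LED performance characteristics
    # (e.g., lumen depreciation and color shift).
    t = np.array([1_000.0, 5_000.0, 10_000.0])           # hours
    r_lumen = marginal_reliability(t, alpha=3e-3, beta=1.0, threshold=30.0)
    r_color = marginal_reliability(t, alpha=2e-3, beta=1.2, threshold=25.0)

    theta = 4.0                                           # assumed positive dependence
    r_joint = frank_copula(r_lumen, r_color, theta)
    for ti, rj in zip(t, r_joint):
        print(f"t = {ti:7.0f} h   system reliability ≈ {rj:.4f}")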

  5. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    Science.gov (United States)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  6. Knowledge modelling and reliability processing: presentation of the Figaro language and associated tools

    International Nuclear Information System (INIS)

    Bouissou, M.; Villatte, N.; Bouhadana, H.; Bannelier, M.

    1991-12-01

    EDF has been developing for several years an integrated set of knowledge-based and algorithmic tools for the automation of reliability assessment of complex (especially sequential) systems. In this environment, the reliability expert has at his disposal all the powerful software tools for qualitative and quantitative processing; in addition, he gets various means to generate the inputs for these tools automatically, through the acquisition of graphical data. The development of these tools has been based on FIGARO, a specific language, which was built to obtain a homogeneous system modelling. Various compilers and interpreters translate a FIGARO model into conventional models, such as fault-trees, Markov chains, and Petri networks. In this report, we introduce the main basics of the FIGARO language, illustrating them with examples.

  7. A reliability model of a warm standby configuration with two identical sets of units

    International Nuclear Information System (INIS)

    Huang, Wei; Loman, James; Song, Thomas

    2015-01-01

    This article presents a new reliability model and the development of its analytical solution for a warm standby redundant configuration with units that are originally operated in active mode, and then, upon turn-on of the originally standby units, are put into warm standby mode. These units can be used later if a standby-turned-active unit fails. Numerical results of an example configuration are presented and discussed with comparison to other warm standby configurations, and to Monte Carlo simulation results obtained from BlockSim software. Results show that the Monte Carlo simulation model gives a virtually identical reliability value when the simulation uses a high number of replications, confirming the developed model. - Highlights: • A new reliability model is developed for a warm standby redundancy with two sets of identical units. • The units are subject to state changes from active to standby and then back to active mode. • A closed-form analytical solution is developed with the exponential distribution. • To validate the developed model, a Monte Carlo simulation for an exemplary configuration is performed.
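
    The pairing of a closed-form solution with a Monte Carlo check can be illustrated on a simpler, classical warm-standby pair with perfect switching and exponential lifetimes (not the two-set configuration of the article); with the assumed rates below, the two numbers should agree to within sampling error.

    import numpy as np

    rng = np.random.default_rng(7)

    LAM_ACTIVE  = 1.0e-3   # failure rate while operating, per hour (assumed)
    LAM_STANDBY = 2.0e-4   # lower failure rate while in warm standby, per hour (assumed)
    T = 1_000.0            # mission time, hours

    def reliability_closed_form(t, la, ls):
        """R(t) = e^{-la t} [1 + la (1 - e^{-ls t}) / ls] for exponential lifetimes."""
        return np.exp(-la * t) * (1.0 + la * (1.0 - np.exp(-ls * t)) / ls)

    def reliability_monte_carlo(t, la, ls, n=500_000):
        t1 = rng.exponential(1.0 / la, n)            # active unit lifetime
        warm_life = rng.exponential(1.0 / ls, n)     # standby lifetime while in warm mode
        active_life2 = rng.exponential(1.0 / la, n)  # standby lifetime once switched on
        survive = (t1 > t) | ((warm_life > t1) & (t1 + active_life2 > t))
        return survive.mean()

    print(f"closed form : {reliability_closed_form(T, LAM_ACTIVE, LAM_STANDBY):.5f}")
    print(f"Monte Carlo : {reliability_monte_carlo(T, LAM_ACTIVE, LAM_STANDBY):.5f}")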

  8. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    Science.gov (United States)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set plotted by differential equations and verified. Through referring forward, the reliability value of the control unit is determined in different kinds of modes. Finally, weak nodes are noted in the control unit.

  9. A competing risk model for the reliability of cylinder liners in marine Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Bocchetti, D. [Grimaldi Group, Naples (Italy); Giorgio, M. [Department of Aerospace and Mechanical Engineering, Second University of Naples, Aversa (Italy); Guida, M. [Department of Information Engineering and Electrical Engineering, University of Salerno, Fisciano (Italy); Pulcini, G. [Istituto Motori, National Research Council-CNR, Naples (Italy)], E-mail: g.pulcini@im.cnr.it

    2009-08-15

    In this paper, a competing risk model is proposed to describe the reliability of the cylinder liners of a marine Diesel engine. Cylinder liners present two dominant failure modes: wear degradation and thermal cracking. The wear process is described through a stochastic process, whereas the failure time due to thermal cracking is described by the Weibull distribution. The use of the proposed model allows goodness-of-fit tests and parameter estimation to be performed on the basis of both wear and failure data. Moreover, it enables reliability estimates of the state of the liners to be obtained and the hierarchy of the failure mechanisms to be determined for any given age and wear level of the liner. The model has been applied to a real data set: 33 cylinder liners of Sulzer RTA 58 engines, which equip twin ships of the Grimaldi Group. Estimates of the liner reliability and of other quantities of interest under the competing risk model are obtained, as well as the conditional failure probability and mean residual lifetime, given the survival age and the accumulated wear. Furthermore, the model has been used to estimate the probability that a liner fails due to one of the failure modes when both of these modes act.
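
    A Python sketch of the competing-risk bookkeeping, assuming independent risks, a Gamma-process stand-in for the wear model and a Weibull lifetime for thermal cracking; all parameter values are invented. It also prints which failure mode contributes more failure probability at each age, echoing the "hierarchy of the failure mechanisms" mentioned above.

    import numpy as np
    from scipy.stats import gamma

    WEAR_LIMIT = 4.0             # mm of liner wear treated as failure (assumed)
    ALPHA, BETA = 2.0e-4, 0.5    # Gamma-process shape rate (per hour) and scale (mm)
    ETA, SHAPE = 60_000.0, 1.8   # Weibull scale (hours) and shape for thermal cracking

    def r_wear(t):
        """P(wear below the limit at age t) under the Gamma-process stand-in."""
        return gamma.cdf(WEAR_LIMIT, a=ALPHA * t, scale=BETA)

    def r_crack(t):
        """Weibull survival for the thermal-cracking mode."""
        return np.exp(-(t / ETA) ** SHAPE)

    for t in (10_000.0, 30_000.0, 50_000.0):
        r = r_wear(t) * r_crack(t)               # independent competing risks
        # Which mode contributes more failure probability by age t?
        dominant = "wear" if (1 - r_wear(t)) > (1 - r_crack(t)) else "cracking"
        print(f"t = {t:6.0f} h   R = {r:.3f}   dominant risk: {dominant}")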

  10. Age-dependent reliability model considering effects of maintenance and working conditions

    International Nuclear Information System (INIS)

    Martorell, Sebastian; Sanchez, Ana; Serradell, Vicente

    1999-01-01

    Nowadays, there is some doubt about building new nuclear power plants (NPPs). Instead, there is a growing interest in analyzing the possibility to extend current NPP operation, where life management programs play an important role. The evolution of the NPP safety depends on the evolution of the reliability of its safety components, which, in turn, is a function of their age along the NPP operational life. In this paper, a new age-dependent reliability model is presented, which includes parameters related to surveillance and maintenance effectiveness and working conditions of the equipment, both environmental and operational. This model may be used to support NPP life management and life extension programs, by improving or optimizing surveillance and maintenance tasks using risk and cost models based on such an age-dependent reliability model. The results of the sensitivity study in the example application show that the selection of the most appropriate maintenance strategy would directly depend on the previous parameters. Then, very important differences are expected to appear under certain circumstances, particularly, in comparison with other models that do not consider maintenance effectiveness and working conditions simultaneously

  11. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  12. An Open Modelling Approach for Availability and Reliability of Systems - OpenMARS

    CERN Document Server

    Penttinen, Jussi-Pekka; Gutleber, Johannes

    2018-01-01

    This document introduces and gives the specification of OpenMARS, which is an open modelling approach for availability and reliability of systems. It supports the most common risk assessment and operation modelling techniques. Uniquely, OpenMARS allows combining and connecting models defined with different techniques. This ensures that a modeller has a high degree of freedom to accurately describe the modelled system without limitations imposed by an individual technique. Here the OpenMARS model definition is specified with a tool-independent tabular format, which supports managing models developed in a collaborative fashion. The origin of our research is in the Future Circular Collider (FCC) study, where we developed the unique features of our concept to model the availability and luminosity production of particle colliders. We were motivated to describe our approach in detail as we see potential further applications in performance and energy efficiency analyses of large scientific infrastructures or industrial processe...

  13. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is, for the first time, modeled as a capacitated network reliability problem, allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation, to generate potential network interdiction strategies, (2) capacitated network reliability techniques, to analyze strategies' source-sink flow reliability, and (3) an evolutionary optimization technique, to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach.

  14. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of the bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures as a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; it depends on the failure probability, 1-R, and this representation denotes the memory characteristics of the second failure cause. Man-machine interference may affect the failure rate positively, due to learning and correction, or negatively, as a consequence of inappropriate human habits in system operation; it is suggested that this item is correlated to the reliability, R, as well as to the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used in the validation of the proposed model. Satisfactory results are found from the comparisons
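
    A minimal sketch of such an additive hazard rate h(R) is shown below. The exact functional form of each term and all coefficient values are illustrative assumptions rather than the forms fitted in the paper; only the four-mechanism additive structure follows the abstract.

        import numpy as np

        def hazard_rate(R, c_random=0.01, c_damage=0.05, c_human=0.02, c_adapt=0.005, n=3.0):
            """Illustrative additive bathtub hazard h(R); each term mirrors one
            failure mechanism named in the abstract (assumed forms)."""
            random_failures = c_random                  # constant Poisson-type term
            cumulative_damage = c_damage * (1.0 - R)    # grows with failure probability 1-R
            man_machine = c_human * R * (1.0 - R)       # depends on both R and 1-R
            adaptation = c_adapt * R**n                 # power of reliability, fades as R drops
            return random_failures + cumulative_damage + man_machine + adaptation

        R = np.linspace(1.0, 0.05, 5)   # reliability decreasing over the lifetime
        print(hazard_rate(R))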

  15. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV systems contain solar cell panels, power electronic converters, high power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes are different in terms of connecting the inverter to PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and capacity level of a PV system considering the three topologies.
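
    As a toy illustration of a probabilistic power output model of the kind described above, the sketch below combines a binomial model of the number of healthy modules (as in a micro-inverter topology) with a small discrete irradiance mixture. All numbers, the discretization and the independence assumptions are invented for illustration and do not reproduce the paper's model or its topology comparison.

        import numpy as np
        from scipy.stats import binom

        def pv_output_distribution(n_modules=20, p_module_up=0.97, module_kw=0.3,
                                   irradiance_factors=(0.0, 0.25, 0.5, 0.75, 1.0),
                                   irradiance_probs=(0.35, 0.15, 0.15, 0.15, 0.20)):
            """Toy probabilistic power-output model: the number of healthy modules
            is binomial, weather variability is a discrete irradiance mixture.
            Returns a dict {power_kW: probability}; purely illustrative numbers."""
            dist = {}
            for k in range(n_modules + 1):
                p_k = binom.pmf(k, n_modules, p_module_up)
                for f, p_f in zip(irradiance_factors, irradiance_probs):
                    power = round(k * module_kw * f, 3)
                    dist[power] = dist.get(power, 0.0) + p_k * p_f
            return dist

        d = pv_output_distribution()
        print("P(output = 0 kW) =", round(d.get(0.0, 0.0), 4))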

  16. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book tells of reliability engineering, which includes quality and reliability, reliability data, importance of reliability engineering, reliability and measure, the poisson process like goodness of fit test and the poisson arrival model, reliability estimation like exponential distribution, reliability of systems, availability, preventive maintenance such as replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection, analysis of common cause failure, and analysis model of repair effect.

  17. Solvation of hydrocarbons in aqueous-organic mixtures

    International Nuclear Information System (INIS)

    Sedov, I.A.; Magsumov, T.I.; Solomonov, B.N.

    2016-01-01

    Highlights: • Thermodynamic functions of solvation in mixtures of water with acetone and acetonitrile are measured at T = 298.15 K. • Solvation of n-octane and toluene in aqueous-organic mixtures is studied. • With increasing water content, Gibbs free energies grow steadily, while enthalpies have a maximum. • Hydrocarbons are preferentially solvated by the organic cosolvent even in mixtures with rather high water content. • Acetonitrile suppresses the hydrophobic effect less than acetone. - Abstract: We study the solvation of two hydrocarbons, n-octane and toluene, in binary mixtures of water with organic cosolvents. Two polar aprotic cosolvents that are miscible with water in any proportion, acetonitrile and acetone, were considered. We determine the magnitudes of the thermodynamic functions of dissolution and solvation at T = 298.15 K in mixtures of various compositions. Solution calorimetry was used to measure the enthalpies of solution, and GC headspace analysis was applied to obtain limiting activity coefficients of the solutes in the studied systems. For the first time, the enthalpies of solution of an alkane in mixtures with high water content were measured directly. We observed well-pronounced maxima in the dependence of the enthalpies of solvation on the solvent composition and no maxima for the Gibbs free energies of solvation. Two factors are concluded to be important in explaining the observed tendencies: the high energy cost of reorganization of the binary solvent upon insertion of solute molecules and the preferential surrounding of the hydrocarbons by molecules of the organic cosolvent. Enthalpy-entropy compensation leads to a steady growth of the Gibbs free energies with increasing water content. On the other hand, consideration of the plots of the Gibbs free energy against the enthalpy of solvation clearly shows that the solvation properties change dramatically after addition of a rather small amount of organic cosolvent. It is shown that they

  18. A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems

    International Nuclear Information System (INIS)

    Lin, Yanhui

    2016-01-01

    Components of nuclear safety systems are in general highly reliable, which leads to a difficulty in modeling their degradation and failure behaviors due to the limited amount of data available. Besides, the complexity of such modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes and that these can be dependent under certain circumstances, and influenced by a number of external factors (e.g. temperature, stress, mechanical shocks, etc.). In this complicated problem setting, this PhD work aims to develop a holistic framework of models and computational methods for the reliability-based analysis and maintenance optimization of nuclear safety systems taking into account the available knowledge on the systems, degradation and failure behaviors, their dependencies, the external influencing factors and the associated uncertainties.The original scientific contributions of the work are: (1) For single components, we integrate random shocks into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks. (2) For multi-component systems (with a limited number of components):(a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in a system whose degradation processes are modeled by physics-based models and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) the mean absolute deviation importance measures are extended for components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy considering epistemic uncertainty and degradation dependency is derived by combining finite-volume scheme, differential evolution and non-dominated sorting differential evolution; (e) the

  19. Origin of parameter degeneracy and molecular shape relationships in geometric-flow calculations of solvation free energies

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Michael D. [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Chun, Jaehun [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Heredia-Langner, Alejandro [National Security Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Wei, Guowei [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Baker, Nathan A. [Computational and Statistical Analytics Division, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States)

    2013-11-28

    Implicit solvent models are important tools for calculating solvation free energies for chemical and biophysical studies since they require fewer computational resources but can achieve accuracy comparable to that of explicit-solvent models. In past papers, geometric flow-based solvation models have been established for the solvation analysis of small and large compounds. In the present work, the use of realistic, experiment-based parameter choices for the geometric flow models is studied. We find that the experimental parameters of solvent internal pressure p = 172 MPa and surface tension γ = 72 mN/m produce solvation free energies within 1 RT of the global minimum root-mean-squared deviation from experimental data over the expanded set. Our results demonstrate that experimental values can be used for geometric flow solvent model parameters, thus eliminating the need for additional parameterization. We also examine the optimal values of p and γ, which turn out to be strongly anti-correlated. Geometric analysis of the small molecule test set shows that these results are interconnected with an approximately linear relationship between area and volume in the range of molecular sizes spanned by the data set. In spite of this considerable degeneracy between the surface tension and pressure terms in the model, both terms are important for the broader applicability of the model.
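
    For orientation, the sketch below evaluates only the simple area/volume part (γA + pV) of a nonpolar solvation energy using the experiment-based parameters quoted above. In the actual geometric-flow model the solute-solvent boundary, and hence A and V, are determined self-consistently; treating them as fixed inputs here is an illustrative simplification, and the example area and volume are made up.

        def nonpolar_energy_kj_mol(area_A2, volume_A3, gamma_mN_per_m=72.0, p_MPa=172.0):
            """Evaluate gamma*A + p*V in kJ/mol from a surface area (in A^2) and a
            volume (in A^3); only the unit conversions are nontrivial."""
            N_A = 6.02214076e23
            gamma = gamma_mN_per_m * 1e-3          # J/m^2
            p = p_MPa * 1e6                        # Pa = J/m^3
            energy_joule = gamma * area_A2 * 1e-20 + p * volume_A3 * 1e-30
            return energy_joule * N_A / 1e3

        # Made-up molecular surface area and volume for a small solute
        print(nonpolar_energy_kj_mol(area_A2=200.0, volume_A3=150.0))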

  20. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of and ongoing research in stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering and communication engineering, distinguished researchers review and build on the contributions made by Professor Shunji Osaki over the last four decades. Fundamental yet significant research results are presented and discussed clearly alongside new ideas and topics on stochastic reliability and maintenance modeling to inspire future research. Across 15 chapters readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  1. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    Full Text Available We have analyzed the effect of innovations in Nanotechnology on Wireless Sensor Networks (WSN) and have modeled Carbon Nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink-MATLAB and a library has been developed. Integration of CNTs in a WSN for various modules such as sensors, microprocessors, batteries, etc. has been shown. Also, the average energy consumption for the system has been formulated and its reliability has been shown holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate as well as enhance the assimilation of CNT-based devices in a WSN. Finally, we have commented on the challenges that exist in this technology and described the important factors that need to be considered for calculating reliability. This research will help in the practical implementation of CNT-based devices and the analysis of their key effects on the WSN environment. The work has been executed on Simulink and the Distributed Computing Toolbox of MATLAB. The proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real CNT-based devices, which is a major hurdle in bringing this success from the lab to the commercial market. Recent CNT research has been used to build an energy-efficient model, which will also lead to the development of CAD tools. The library for reliability and energy consumption includes analysis of various parts of a WSN system constructed from CNTs. Nano routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.

  2. Solvation of lithium ion in dimethoxyethane and propylene carbonate

    Science.gov (United States)

    Chaban, Vitaly

    2015-07-01

    Solvation of the lithium ion (Li+) in dimethoxyethane (DME) and propylene carbonate (PC) is of scientific significance and urgency in the context of lithium-ion batteries. I report PM7-MD simulations on the composition of Li+ solvation shells (SH) in a few DME/PC mixtures. The equimolar mixture features preferential solvation by PC, in agreement with classical MD studies. However, one DME molecule is always present in the first SH, supplementing the cage formed by five PC molecules. As PC molecules are removed, DME gradually fills the vacant places. In the PC-poor mixtures, the entire SH is populated by five DME molecules.

  3. Risk evaluations of aging phenomena: the linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1987-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in the collected data has significant effects on the component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance
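
    Under the linear aging model, the failure rate can be written as lambda(t) = lambda0 + a*t, so the reliability follows from the integrated hazard. The sketch below uses this generic textbook form with made-up rates; the report's own parameter values and its nonlinear or dependent-aging extensions are not reproduced.

        import numpy as np

        def reliability_linear_aging(t, lam0=1e-4, a=2e-8):
            """R(t) = exp(-(lam0*t + 0.5*a*t**2)) for lam(t) = lam0 + a*t."""
            return np.exp(-(lam0 * t + 0.5 * a * t**2))

        t_hours = np.array([0.0, 1.0e3, 1.0e4, 5.0e4])
        print(reliability_linear_aging(t_hours))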

  4. Risk evaluations of aging phenomena: The linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1986-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in the collected data has significant effects on the component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance

  5. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2003-04-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility in KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and preliminary design of the experiment has been carried out.

  6. Development of thermal hydraulic models for the reliable regulatory auditing code

    International Nuclear Information System (INIS)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.

    2003-04-01

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility in KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and preliminary design of the experiment has been carried out

  7. Modeling and simulation for microelectronic packaging assembly manufacturing, reliability and testing

    CERN Document Server

    Liu, Sheng

    2011-01-01

    Although there is increasing need for modeling and simulation in the IC package design phase, most assembly processes and various reliability tests are still based on the time-consuming "test and try out" method to obtain the best solution. Modeling and simulation can easily ensure virtual Design of Experiments (DoE) to achieve the optimal solution. This has greatly reduced the cost and production time, especially for new product development. Using modeling and simulation will become increasingly necessary for future advances in 3D package development.  In this book, Liu and Liu allow people

  8. Reliability and Maintainability Model (RAM): User and Maintenance Manual. Part 2; Improved Supportability Analysis

    Science.gov (United States)

    Ebeling, Charles E.

    1996-01-01

    This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.

  9. Evaluation of seismic reliability of steel moment resisting frames rehabilitated by concentric braces with probabilistic models

    Directory of Open Access Journals (Sweden)

    Fateme Rezaei

    2017-08-01

    Full Text Available The failure probability of a structure designed by "deterministic methods" can be higher than that of one designed in a similar situation using probabilistic methods and models that consider "uncertainties". The main purpose of this research was to evaluate the seismic reliability of steel moment resisting frames rehabilitated with concentric braces using probabilistic models. To do so, three-story and nine-story steel moment resisting frames were designed based on the resistance criteria of the Iranian code and then rehabilitated with concentric braces based on controlling drift limitations. The probability of frame failure was evaluated using probabilistic models of the earthquake magnitude and location, the ground shaking intensity in the area of the structure, a probabilistic model of the building response (based on maximum lateral roof displacement), and probabilistic methods. These frames were analyzed under a subcrustal source by the sampling probabilistic method "Risk Tools" (RT). Comparing the exceedance probability curves of the building response (or selected points on them) for the three-story and nine-story model frames before and after rehabilitation, the seismic response of the rehabilitated frames was reduced and their reliability was improved. Also, the main variables effective in reducing the probability of frame failure were determined using sensitivity analysis by the FORM probabilistic method. The most effective variables in reducing the probability of frame failure are the magnitude model, the ground shaking intensity model error and the magnitude model error

  10. Using graph models for evaluating in-core monitoring system reliability by the method of imitating simulation

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Zyuzin, N.N.; Levin, G.L.; Chesnokov, A.N.

    1987-01-01

    An approach for estimating the reliability factors of complex redundant systems at early stages of development using the method of imitating simulation is considered. Different types of models, with their merits and drawbacks, are given. Features of in-core monitoring systems and the advisability of applying graph models and elements of graph theory for estimating the reliability of such systems are shown. The results of an investigation of the reliability factors of the reactor monitoring, control and core local protection subsystem are presented

  11. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    Science.gov (United States)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
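
    The computational core described above, fourth-order Runge-Kutta integration of the Chapman-Kolmogorov equations dP/dt = P·Q, is sketched below for a single two-state unit. The generator matrix and the failure/repair rates are placeholders, not the urea synthesis subsystems or rates used in the paper.

        import numpy as np

        lam, mu = 0.002, 0.1                       # illustrative failure/repair rates (per hour)
        Q = np.array([[-lam,  lam],                # generator matrix: state 0 = up, 1 = down
                      [  mu,  -mu]])

        def rk4_step(P, h):
            """One fourth-order Runge-Kutta step of dP/dt = P @ Q."""
            k1 = P @ Q
            k2 = (P + 0.5 * h * k1) @ Q
            k3 = (P + 0.5 * h * k2) @ Q
            k4 = (P + h * k3) @ Q
            return P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        P = np.array([1.0, 0.0])                   # start in the operating state
        for _ in range(5000):                      # integrate 5000 one-hour steps
            P = rk4_step(P, 1.0)
        print("availability:", P[0], " analytic long-run value:", mu / (lam + mu))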

  12. An artificial neural network for modeling reliability, availability and maintainability of a repairable system

    International Nuclear Information System (INIS)

    Rajpal, P.S.; Shishodia, K.S.; Sekhon, G.S.

    2006-01-01

    The paper explores the application of artificial neural networks to model the behaviour of a complex, repairable system. A composite measure of reliability, availability and maintainability parameters has been proposed for measuring the system performance. The artificial neural network has been trained using past data of a helicopter transportation facility. It is used to simulate behaviour of the facility under various constraints. The insights obtained from results of simulation are useful in formulating strategies for optimal operation of the system

  13. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  14. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and under workplace restrictions. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  15. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and under workplace restrictions. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  16. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e, E-mail: julianapduarte@poli.ufrj.br, E-mail: victor.coppo.leite@poli.ufrj.br, E-mail: frutuoso@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role as, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, together with Fortran 90 to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probability figures as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time (DTBN) Bayesian networks were used in order to check their capabilities of modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model showed results as good as the DTBN with a much simpler approach. (author)

  17. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    International Nuclear Information System (INIS)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e

    2013-01-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role as, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, together with Fortran 90 to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probability figures as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time (DTBN) Bayesian networks were used in order to check their capabilities of modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model showed results as good as the DTBN with a much simpler approach. (author)
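
    One of the dependent-event cases mentioned above, a cold-standby pair with an imperfect switching mechanism, has a simple closed-form reliability when repair is neglected; the sketch below evaluates it for made-up rates. This textbook special case only illustrates the kind of dependency discussed and is not the paper's Bayesian network or Markov treatment.

        import numpy as np

        def cold_standby_reliability(t, lam=1e-3, p_switch=0.95):
            """R(t) = exp(-lam*t) * (1 + p_switch*lam*t): two identical units,
            no repair, standby activated with probability p_switch on demand."""
            return np.exp(-lam * t) * (1.0 + p_switch * lam * t)

        t = np.array([0.0, 500.0, 1000.0, 2000.0])
        print(cold_standby_reliability(t))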

  18. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards like IEC 61508, people care more about the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique to assess the reliability measures of safety instrumented systems, but creating Markov models manually is error-prone and time-consuming. This paper presents a new technique to automatically create Markov models for reliability assessment of safety instrumented systems. Many safety-related factors, such as failure modes, self-diagnostics, restorations, common cause and voting, are included in the Markov models. A framework is generated first based on voting, failure modes and self-diagnostics. Then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. Eventual simplification of Markov models can be done by state merging. Examples given in this paper show how explosively the size of a Markov model increases as the system becomes only a little more complicated, as well as the advantages of automatic creation of Markov models

  19. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeology parameters, boundary conditions, and initial conditions of subsurface flow and transport, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.

  20. Reliability modeling and analysis for a novel design of modular converter system of wind turbines

    International Nuclear Information System (INIS)

    Zhang, Cai Wen; Zhang, Tieling; Chen, Nan; Jin, Tongdan

    2013-01-01

    Converters play a vital role in wind turbines. The concept of modularity is gaining in popularity in converter design for modern wind turbines in order to achieve high reliability as well as cost-effectiveness. In this study, we are concerned with a novel modular converter topology invented by Hjort (Modular converter system with interchangeable converter modules, World Intellectual Property Organization, Pub. No. WO29027520 A2, 5 March 2009). In this architecture, the converter comprises a number of identical and interchangeable basic modules. Each module can operate in either AC/DC or DC/AC mode, depending on whether it functions on the generator or the grid side. Moreover, each module can be reconfigured from one side to the other, depending on the system’s operational requirements. This is a shining example of full-modular design. This paper aims to model and analyze the reliability of such a modular converter. A Markov modeling approach is applied to the system reliability analysis. In particular, six feasible converter system models based on Hjort’s architecture are investigated. Through numerical analyses and comparison, we provide insights and guidance for converter designers in their decision-making.

  1. An Assessment of the VHTR Safety Distance Using the Reliability Physics Model

    International Nuclear Information System (INIS)

    Lee, Joeun; Kim, Jintae; Jae, Moosung

    2015-01-01

    In Korea, plans to produce hydrogen using high-temperature heat from nuclear power are in progress. To produce hydrogen from nuclear plants, a supply temperature above 800 °C is required. Therefore, the Very High Temperature Reactor (VHTR), which is able to provide about 950 °C, is suitable. Under the high-temperature, corrosive conditions in which hydrogen might easily be released, a hydrogen production facility coupled to a VHTR carries a danger of explosion. Moreover, an explosion has a bad influence not only upon the facility itself but also on the VHTR. Such explosions result in unsafe situations that cause serious damage. However, from a thermal-hydraulic point of view, a long separation distance lowers efficiency. Thus, in this study, a methodology for assessing the safety distance between the hydrogen production facilities and the VHTR is developed with a reliability physics model. Based on the standard safety criterion, a value of 1 × 10⁻⁶, the safety distance between the hydrogen production facilities and the VHTR calculated using the reliability physics model is 60-100 m. In the future, a detailed assessment of the characteristics of the VHTR, its capacity to resist the pressure from an outside hydrogen explosion, and the overpressure for a large detonation volume is expected to identify a more precise safety distance using this reliability physics model

  2. Improvements to the APBS biomolecular solvation software suite: Improvements to the APBS Software Suite

    Energy Technology Data Exchange (ETDEWEB)

    Jurrus, Elizabeth [Pacific Northwest National Laboratory, Richland Washington; Engel, Dave [Pacific Northwest National Laboratory, Richland Washington; Star, Keith [Pacific Northwest National Laboratory, Richland Washington; Monson, Kyle [Pacific Northwest National Laboratory, Richland Washington; Brandi, Juan [Pacific Northwest National Laboratory, Richland Washington; Felberg, Lisa E. [University of California, Berkeley California; Brookes, David H. [University of California, Berkeley California; Wilson, Leighton [University of Michigan, Ann Arbor Michigan; Chen, Jiahui [Southern Methodist University, Dallas Texas; Liles, Karina [Pacific Northwest National Laboratory, Richland Washington; Chun, Minju [Pacific Northwest National Laboratory, Richland Washington; Li, Peter [Pacific Northwest National Laboratory, Richland Washington; Gohara, David W. [St. Louis University, St. Louis Missouri; Dolinsky, Todd [FoodLogiQ, Durham North Carolina; Konecny, Robert [University of California San Diego, San Diego California; Koes, David R. [University of Pittsburgh, Pittsburgh Pennsylvania; Nielsen, Jens Erik [Protein Engineering, Novozymes A/S, Copenhagen Denmark; Head-Gordon, Teresa [University of California, Berkeley California; Geng, Weihua [Southern Methodist University, Dallas Texas; Krasny, Robert [University of Michigan, Ann Arbor Michigan; Wei, Guo-Wei [Michigan State University, East Lansing Michigan; Holst, Michael J. [University of California San Diego, San Diego California; McCammon, J. Andrew [University of California San Diego, San Diego California; Baker, Nathan A. [Pacific Northwest National Laboratory, Richland Washington; Brown University, Providence Rhode Island

    2017-10-24

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advances in computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package, including: a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph-theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.

  3. SAMPL4, a blind challenge for computational solvation free energies: the compounds considered

    Science.gov (United States)

    Guthrie, J. Peter

    2014-03-01

    For the fifth time I have provided a set of solvation energies (1 M gas to 1 M aqueous) for a SAMPL challenge. In this set there are 23 blind compounds and 30 supplementary compounds with structures related to those of the blind set, but for which the solvation energy is readily available. The best current values for each compound are presented along with complete documentation of the experimental origins of the solvation energies. The calculations needed to go from reported data to solvation energies are presented, with particular attention to aspects which are new to this set. For some compounds the vapor pressures (VP) were reported for the liquid compound, which is solid at room temperature. To correct from VP(subcooled liquid) to VP(sublimation) requires ΔS(fusion), which is only known for mannitol. Estimated values were used for the others, all but one of which were benzene derivatives and expected to have very similar values. The final compound for which ΔS(fusion) was estimated was menthol, which melts at 42 °C, so that modest errors in ΔS(fusion) will have little effect. It was also necessary to look into the effects of including estimated values of ΔCp on this correction. The approximate sizes of the effects of including ΔCp in the correction from VP(subcooled liquid) to VP(sublimation) were estimated, and it was noted that inclusion of ΔCp invariably makes ΔG(S) more positive. To extend the set of compounds for which the solvation energy could be calculated, we explored the use of boiling point (b.p.) data from Reaxys/Beilstein as a substitute for studies of the VP as a function of temperature. B.p. data are not always reliable, so it was necessary to develop a criterion for rejecting outliers. For two compounds (chlorinated guaiacols) it became clear that inclusion represented overreach; for each there were only two independent pressure-temperature points, which is too little for a trustworthy extrapolation. For a number of compounds the extrapolation from lowest
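
    The subcooled-liquid correction mentioned above is commonly written as follows; this is a standard thermodynamic relation given here for orientation, not necessarily the author's exact working equation. Neglecting ΔCp gives the first approximation, and retaining a temperature-independent ΔCp adds the bracketed term:

        \ln\frac{p_{\text{subcooled liquid}}}{p_{\text{solid}}}
          = \frac{\Delta G_{\text{fus}}(T)}{RT}
          \approx \frac{\Delta S_{\text{fus}}\,(T_m - T)}{RT},
        \qquad
        \Delta G_{\text{fus}}(T)
          = \Delta S_{\text{fus}}(T_m - T)
          - \Delta C_p\!\left[(T_m - T) - T\ln\frac{T_m}{T}\right].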

  4. Testing comparison models of DASS-12 and its reliability among adolescents in Malaysia.

    Science.gov (United States)

    Osman, Zubaidah Jamil; Mukhtar, Firdaus; Hashim, Hairul Anuar; Abdul Latiff, Latiffah; Mohd Sidik, Sherina; Awang, Hamidin; Ibrahim, Normala; Abdul Rahman, Hejar; Ismail, Siti Irma Fadhilah; Ibrahim, Faisal; Tajik, Esra; Othman, Norlijah

    2014-10-01

    The 21-item Depression, Anxiety and Stress Scale (DASS-21) is frequently used in non-clinical research to measure mental health factors among adults. However, previous studies have concluded that the 21 items are not stable for use among the adolescent population. Thus, the aims of this study are to examine the factor structure and to report on the reliability of the refined version of the DASS that consists of 12 items. A total of 2850 students (aged 13 to 17 years old) from the three major ethnic groups in Malaysia completed the DASS-21. The study was conducted at 10 randomly selected secondary schools in the northern state of Peninsular Malaysia. The study population comprised secondary school students (Forms 1, 2 and 4) from the selected schools. Based on the results of the EFA stage, 12 items were included in a final CFA to test the fit of the model. Using maximum likelihood procedures to estimate the model, the selected fit indices indicated a close model fit (χ(2)=132.94, df=57, p=.000; CFI=.96; RMR=.02; RMSEA=.04). Moreover, significant loadings of all the unstandardized regression weights implied an acceptable convergent validity. Besides the convergent validity of the items, discriminant validity of the subscales was also evident from the moderate latent factor inter-correlations, which ranged from .62 to .75. The subscale reliability was further estimated using Cronbach's alpha and adequate reliability of the subscales was obtained (Total=.76; Depression=.68; Anxiety=.53; Stress=.52). The new version of the 12-item DASS for adolescents in Malaysia (DASS-12) is reliable and has a stable factor structure, and thus it is a useful instrument for distinguishing between depression, anxiety and stress.
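
    Cronbach's alpha, used above for the subscale reliabilities, can be computed directly from a respondents-by-items score matrix. The sketch below applies the usual formula to fabricated toy scores; it does not use the study's data.

        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array, rows = respondents, columns = items of one subscale.
            alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        toy = np.array([[3, 2, 3, 2],
                        [1, 1, 2, 1],
                        [2, 2, 2, 3],
                        [0, 1, 1, 0],
                        [3, 3, 2, 3]])
        print(round(cronbach_alpha(toy), 3))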

  5. Zero-point energy effects in anion solvation shells.

    Science.gov (United States)

    Habershon, Scott

    2014-05-21

    By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.
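
    The zero-point-energy argument can be stated compactly: in a harmonic picture each water vibrational mode of frequency ω_i in the first shell contributes ħω_i/2, so anion-dependent shifts of those frequencies translate into different quantum corrections. This is a schematic relation for orientation, not the paper's path-integral protocol:

        E_{\mathrm{ZPE}} = \sum_i \tfrac{1}{2}\hbar\,\omega_i ,
        \qquad
        \Delta E_{\mathrm{ZPE}}^{\mathrm{shell}}
          = \tfrac{1}{2}\hbar \sum_i \left(\omega_i^{\mathrm{bound}} - \omega_i^{\mathrm{bulk}}\right).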

  6. Proton solvation and proton transfer in chemical and electrochemical processes

    International Nuclear Information System (INIS)

    Lengyel, S.; Conway, B.E.

    1983-01-01

    This chapter examines the proton solvation and characterization of the H₃O⁺ ion, proton transfer in chemical ionization processes in solution, continuous proton transfer in conductance processes, and proton transfer in electrode processes. Topics considered include the condition of the proton in solution, the molecular structure of the H₃O⁺ ion, thermodynamics of proton solvation, overall hydration energy of the proton, hydration of H₃O⁺, deuteron solvation, partial molal entropy and volume and the entropy of proton hydration, proton solvation in alcoholic solutions, analogies to electrons in semiconductors, continuous proton transfer in conductance, definition and phenomenology of the unusual mobility of the proton in solution, solvent structure changes in relation to anomalous proton mobility, the kinetics of the proton-transfer event, theories of abnormal proton conductance, and the general theory of the contribution of transfer reactions to overall transport processes

  7. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly-generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • Developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machine and logistic regressions are employed to develop surrogate models. • A numerical example of the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
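
    The sketch below illustrates the surrogate idea described above on a toy graph: exact connectivity checks label a training sample of component-failure realizations, a logistic regression classifier is fitted, and new Monte Carlo realizations are then classified instead of being checked exactly. The grid graph, failure probability and sample sizes are placeholders, not the California gas network study.

        import numpy as np
        import networkx as nx
        from sklearn.linear_model import LogisticRegression

        def sample_and_label(g, s, t, n, p_fail, rng):
            """Draw component-failure realizations (binary edge states) and label
            each one with an exact s-t connectivity check."""
            edges = list(g.edges())
            X, y = [], []
            for _ in range(n):
                state = (rng.random(len(edges)) > p_fail).astype(int)
                sub = nx.Graph([e for e, keep in zip(edges, state) if keep])
                sub.add_nodes_from([s, t])
                X.append(state)
                y.append(int(nx.has_path(sub, s, t)))
            return np.array(X), np.array(y)

        rng = np.random.default_rng(1)
        g = nx.grid_2d_graph(5, 5)                      # toy stand-in for a real network
        s, t = (0, 0), (4, 4)
        X_train, y_train = sample_and_label(g, s, t, 2000, 0.3, rng)
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # Surrogate-based Monte Carlo: classify new realizations instead of
        # repeating the exact connectivity check for each one.
        X_new, y_exact = sample_and_label(g, s, t, 5000, 0.3, rng)
        print("surrogate P(connected):", clf.predict(X_new).mean(),
              "exact:", y_exact.mean())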

  8. High-Speed Shaft Bearing Loads Testing and Modeling in the NREL Gearbox Reliability Collaborative: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    McNiff, B.; Guo, Y.; Keller, J.; Sethuraman, L.

    2014-12-01

    Bearing failures in the high speed output stage of the gearbox are plaguing the wind turbine industry. Accordingly, the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) has performed an experimental and theoretical investigation of loads within these bearings. The purpose of this paper is to describe the instrumentation, calibrations, data post-processing and initial results from this testing and modeling effort. Measured HSS torque, bending, and bearing loads are related to model predictions. Of additional interest is examining if the shaft measurements can be simply related to bearing load measurements, eliminating the need for invasive modifications of the bearing races for such instrumentation.

  9. Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J. Lynn [RTI International, Research Triangle Park, NC (United States)

    2017-05-31

    The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.

  10. Ultrafast transient-absorption of the solvated electron in water

    International Nuclear Information System (INIS)

    Kimura, Y.; Alfano, J.C.; Walhout, P.K.; Barbara, P.F.

    1994-01-01

    Ultrafast near infrared (NIR)-pump/variable wavelength probe transient-absorption spectroscopy has been performed on the aqueous solvated electron. The photodynamics of the solvated electron excited to its p-state are qualitatively similar to previous measurements of the dynamics of photoinjected electrons at high energy. This result confirms the previous interpretation of photoinjected electron dynamics as having a rate-limiting bottleneck at low energies presumably involving the p-state

  11. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Automic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer has been developed and installed in the auditing code. A model sensitivity analysis has been performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation has been performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment has been installed. The thermal hydraulic phenomena for the VHTR and the supercritical reactor have been identified for future application and model development.

  12. Accelerated oxygen-induced retinopathy is a reliable model of ischemia-induced retinal neovascularization.

    Science.gov (United States)

    Villacampa, Pilar; Menger, Katja E; Abelleira, Laura; Ribeiro, Joana; Duran, Yanai; Smith, Alexander J; Ali, Robin R; Luhmann, Ulrich F; Bainbridge, James W B

    2017-01-01

    Retinal ischemia and pathological angiogenesis cause severe impairment of sight. Oxygen-induced retinopathy (OIR) in young mice is widely used as a model to investigate the underlying pathological mechanisms and develop therapeutic interventions. We compared directly the conventional OIR model (exposure to 75% O2 from postnatal day (P) 7 to P12) with an alternative, accelerated version (85% O2 from P8 to P11). We found that accelerated OIR induces similar pre-retinal neovascularization but greater retinal vascular regression that recovers more rapidly. The extent of retinal gliosis is similar but neuroretinal function, as measured by electroretinography, is better maintained in the accelerated model. We found no systemic or maternal morbidity in either model. Accelerated OIR offers a safe, reliable and more rapid alternative model in which pre-retinal neovascularization is similar but retinal vascular regression is greater.

  13. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
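
    As a rough illustration of the kind of Markov reliability calculation described above (not the authors' actual feedwater-level control model), the sketch below solves a small continuous-time Markov chain with assumed failure and repair rates and reports the probability of residing in the failed state at a few mission times.

        # Minimal sketch of a continuous-time Markov reliability model.
        # States and transition rates are hypothetical, not from the cited study.
        import numpy as np
        from scipy.linalg import expm

        # States: 0 = OK, 1 = degraded, 2 = failed (absorbing)
        lam1, lam2, mu = 1e-3, 5e-3, 1e-1   # assumed transition rates [1/h]
        Q = np.array([[-lam1,       lam1,  0.0],
                      [   mu, -(mu+lam2), lam2],
                      [  0.0,        0.0,  0.0]])   # generator matrix

        p0 = np.array([1.0, 0.0, 0.0])              # start in the OK state
        for t in (10.0, 100.0, 1000.0):             # mission times [h]
            p_t = p0 @ expm(Q * t)                  # transient state probabilities
            print(f"t = {t:6.0f} h  P(failed) = {p_t[2]:.4e}")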

  14. Rating the raters in a mixed model: An approach to deciphering the rater reliability

    Science.gov (United States)

    Shang, Junfeng; Wang, Yougui

    2013-05-01

    Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, the inconsistency of ratings is a source of variance in scores, and it is therefore quite natural to verify the trustworthiness of ratings. Accordingly, estimation of rater reliability is of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores of the ratees offered by a rater are described by fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for the prediction of random effects. To make a quantitative decision in revealing unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated in multiple simulated data sets and two real data sets.

  15. The reliability of the Hendrich Fall Risk Model in a geriatric hospital.

    Science.gov (United States)

    Heinze, Cornelia; Halfens, Ruud; Dassen, Theo

    2008-12-01

    Aims and objectives.  The purpose of this study was to test the interrater reliability of the Hendrich Fall Risk Model, an instrument to identify patients in a hospital setting with a high risk of falling. Background.  Falls are a serious problem in older patients. Valid and reliable fall risk assessment tools are required to identify high-risk patients and to take adequate preventive measures. Methods.  Seventy older patients were independently and simultaneously assessed by six pairs of raters made up of nursing staff members. Consensus estimates were calculated using simple percentage agreement, and consistency estimates using Spearman's rho and the intraclass correlation coefficient. Results.  Percentage agreement ranged from 0.70 to 0.92 between the six pairs of raters. Spearman's rho coefficients were between 0.54 and 0.80 and the intraclass correlation coefficients were between 0.46 and 0.92. Conclusions.  Whereas some pairs of raters obtained considerable interobserver agreement and internal consistency, the others did not. Therefore, it is concluded that the Hendrich Fall Risk Model is not a reliable instrument. The use of more unambiguously operationalized items is preferred. Relevance to clinical practice.  In practice, well operationalized fall risk assessment tools are necessary. Observer agreement should always be investigated after introducing a standardized measurement tool.
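
    For readers unfamiliar with the consensus and consistency estimates mentioned above, the sketch below computes percentage agreement and Spearman's rho for one hypothetical pair of raters; the scores are invented for illustration and are not the study data.

        # Sketch: agreement statistics for one pair of raters (invented scores).
        import numpy as np
        from scipy.stats import spearmanr

        rater_a = np.array([0, 2, 5, 1, 0, 3, 4, 2, 1, 0])   # hypothetical risk scores
        rater_b = np.array([0, 2, 4, 1, 0, 3, 4, 2, 2, 0])

        percent_agreement = np.mean(rater_a == rater_b)       # consensus estimate
        rho, p_value = spearmanr(rater_a, rater_b)            # consistency estimate

        print(f"percentage agreement = {percent_agreement:.2f}")
        print(f"Spearman's rho       = {rho:.2f} (p = {p_value:.3f})")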

  16. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Full Text Available Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project, PMIP2, model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land–sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally and annually averaged forcing anomaly is very weak at the mid-Holocene, so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.
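
    One common way to assess the reliability of an ensemble against observations, in the spirit of the analysis described above, is a rank histogram; the sketch below builds one from synthetic data only (it does not use the PMIP ensembles or the MARGO/Bartlein syntheses).

        # Sketch: rank histogram for ensemble reliability (synthetic data only).
        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, n_members = 200, 9
        ensemble = rng.normal(0.0, 1.0, size=(n_sites, n_members))   # model anomalies
        observations = rng.normal(0.0, 1.2, size=n_sites)            # proxy anomalies

        # Rank of each observation within the ensemble (0 .. n_members)
        ranks = np.sum(ensemble < observations[:, None], axis=1)
        hist = np.bincount(ranks, minlength=n_members + 1)

        # A reliable ensemble gives a flat histogram; U- or dome-shaped
        # histograms indicate under- or over-dispersion respectively.
        print(hist / hist.sum())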

  17. Solution thermodynamics and preferential solvation of sulfamethazine in (methanol + water) mixtures

    International Nuclear Information System (INIS)

    Delgado, Daniel R.; Almanza, Ovidio A.; Martínez, Fleming; Peña, María A.; Jouyban, Abolghasem; Acree, William E.

    2016-01-01

    Highlights: • Solubility of sulfamethazine (SMT) was measured in (methanol + water) mixtures. • SMT solubility was correlated with Jouyban–Acree model. • Gibbs energy, enthalpy, and entropy of dissolution of SMT were calculated. • Non-linear enthalpy–entropy relationship was observed for SMT. • Preferential solvation of SMT by methanol was analyzed by using the IKBI method. - Abstract: The solubility of sulfamethazine (SMT) in {methanol (1) + water (2)} co-solvent mixtures was determined at five different temperatures from (293.15 to 313.15) K. The sulfonamide exhibited its highest mole fraction solubility in pure methanol (δ1 = 29.6 MPa^1/2) and its lowest mole fraction solubility in water (δ2 = 47.8 MPa^1/2) at each of the five temperatures studied. The Jouyban–Acree model was used to correlate/predict the solubility values. The respective apparent thermodynamic functions Gibbs energy, enthalpy, and entropy of solution were obtained from the solubility data through the van’t Hoff and Gibbs equations. Apparent thermodynamic quantities of mixing were also calculated for this drug using values of the ideal solubility reported in the literature. A non-linear enthalpy–entropy relationship was noted for SMT in plots of both the enthalpy vs. Gibbs energy of mixing and the enthalpy vs. entropy of mixing. These plots suggest two different trends according to the slopes obtained when the composition of the mixtures changes. Accordingly, the mechanism for SMT transfer processes in water-rich mixtures from water to the mixture with 0.70 in mass fraction of methanol is entropy driven. Conversely, the mechanism is enthalpy driven in mixtures whenever the methanol composition exceeds 0.70 mol fraction. An inverse Kirkwood–Buff integral analysis of the preferential solvation of SMT indicated that the drug is preferentially solvated by water in water-rich mixtures but is preferentially solvated by methanol in methanol-rich mixtures.
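
    The apparent enthalpy of solution mentioned above comes from a van't Hoff analysis of solubility against temperature; a minimal sketch of that step is given below, using made-up solubility values rather than the measured SMT data.

        # Sketch: apparent enthalpy of solution from a van't Hoff plot
        # (mole-fraction solubilities below are illustrative, not measured SMT data).
        import numpy as np

        R = 8.314            # J/(mol K)
        T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])      # K
        x = np.array([2.1e-4, 2.9e-4, 3.9e-4, 5.2e-4, 6.9e-4])      # mole fraction

        # ln x = -(dH/R) * (1/T) + const   ->   slope = -dH/R
        slope, intercept = np.polyfit(1.0 / T, np.log(x), 1)
        delta_H = -slope * R / 1000.0                                # kJ/mol
        print(f"apparent enthalpy of solution ~ {delta_H:.1f} kJ/mol")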

  18. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

    A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate those systems' reliabilities by using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving the upper and lower bounds of the probability of system failure. These bounds are derived by using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for both exponential and Weibull failure distributions encountered commonly, which have proven effective through numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old.

  19. Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Shalay Viktor

    2016-01-01

    Full Text Available On the basis of system analysis methods and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment for reliability monitoring was carried out using dispatching data. To check the goodness of fit of the empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis of the physical cause-and-effect mechanism relating the key factors to the changing parameters of the technical systems of oil and gas facilities is made, and the basic types of parameter distributions are defined. The adequacy of the assumed distribution type for the analyzed parameters is evaluated using the Kolmogorov criterion, as the most universal, accurate and adequate test for distributions of continuous processes in complex multiple-technical systems. Calculation methods are provided to support risk assessment and facility safety supervision by independent bodies.

  20. An auto-focusing heuristic model to increase the reliability of a scientific mission

    International Nuclear Information System (INIS)

    Gualdesi, Lavinio

    2006-01-01

    Researchers invest a lot of time and effort on the design and development of components used in a scientific mission. To capitalize on this investment and on the operational experience of the researchers, it is useful to adopt a quantitative data base to monitor the history and usage of the components. This work describes a model to monitor the reliability level of components. The model is very flexible and allows users to compose systems using the same components in different configurations as required by each mission. This tool provides availability and reliability figures for the configuration requested, derived from historical data of the components' previous performance. The system is based on preliminary checklists to establish standard operating procedures (SOP) for all components life phases. When an infringement to the SOP occurs, a quantitative ranking is provided in order to quantify the risk associated with this deviation. The final agreement between field data and expected performance of the component makes the model converge onto a heuristic monitoring system. The model automatically focuses on points of failure at the detailed component element level, calculates risks, provides alerts when a demonstrated risk to safety is encountered, and advises when there is a mismatch between component performance and mission requirements. This model also helps the mission to focus resources on critical tasks where they are most needed

  1. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic.

  2. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  3. Estimation of Abraham solvation equation coefficients for hydrogen bond formation from Abraham solvation parameters for solute acidity and basicity

    NARCIS (Netherlands)

    Noort, van P.C.M.

    2013-01-01

    Abraham solvation equations find widespread use in environmental chemistry and pharmaco-chemistry. The coefficients in these equations, which are solvent (system) descriptors, are usually determined by fitting experimental data. To simplify the determination of these coefficients in Abraham

  4. Reliability of a Novel Model for Drug Release from 2D HPMC-Matrices

    Directory of Open Access Journals (Sweden)

    Rumiana Blagoeva

    2010-04-01

    Full Text Available A novel model of drug release from 2D-HPMC matrices is considered. A detailed mathematical description of matrix swelling and the effect of the initial drug loading are introduced. A numerical approach to the solution of the posed nonlinear 2D problem is used, based on finite element domain approximation and a time difference method. The reliability of the model is investigated in two steps: numerical evaluation of the water uptake parameters, and evaluation of the drug release parameters against available experimental data. The proposed numerical procedure for fitting the model is validated by performing different numerical examples of drug release in two cases (with and without taking into account initial drug loading). The goodness of fit, evaluated by the coefficient of determination, is very good with few exceptions. The obtained results show better model fitting when accounting for the effect of initial drug loading (especially for larger values).

  5. Modeling the reliability and maintenance costs of wind turbines using Weibull analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vachon, W.A. [W.A. Vachon & Associates, Inc., Manchester, MA (United States)

    1996-12-31

    A general description is provided of the basic mathematics and use of Weibull statistical models for modeling component failures and maintenance costs as a function of time. The applicability of the model to wind turbine components and subsystems is discussed with illustrative examples of typical component reliabilities drawn from actual field experiences. Example results indicate the dominant role of key subsystems based on a combination of their failure frequency and repair/replacement costs. The value of the model is discussed as a means of defining (1) maintenance practices, (2) areas in which to focus product improvements, (3) spare parts inventory, and (4) long-term trends in maintenance costs as an important element in project cash flow projections used by developers, investors, and lenders. 6 refs., 8 figs., 3 tabs.
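
    As a minimal sketch of the kind of Weibull reliability calculation described above, the snippet below evaluates the survival function, hazard rate and mean life for a hypothetical subsystem; the shape and scale parameters are assumed values, not figures fitted to the field data in the report.

        # Sketch: two-parameter Weibull reliability and mean life
        # (beta and eta are assumed, not fitted to the cited field data).
        import math

        beta, eta = 1.8, 12000.0      # shape [-], scale [h]

        def reliability(t):
            """Probability that the component survives beyond time t."""
            return math.exp(-(t / eta) ** beta)

        def hazard(t):
            """Instantaneous failure rate at time t."""
            return (beta / eta) * (t / eta) ** (beta - 1)

        mttf = eta * math.gamma(1.0 + 1.0 / beta)     # mean time to failure
        print(f"R(8760 h) = {reliability(8760):.3f}, MTTF = {mttf:.0f} h")
        print(f"hazard at 8760 h = {hazard(8760):.2e} per h")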

  6. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DEFF Research Database (Denmark)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    2017-01-01

    Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging...

  7. Using Evidence Credibility Decay Model for dependence assessment in human reliability analysis

    International Nuclear Information System (INIS)

    Guo, Xingfeng; Zhou, Yanhui; Qian, Jin; Deng, Yong

    2017-01-01

    Highlights: • A new computational model is proposed for dependence assessment in HRA. • We combined three factors of “CT”, “TR” and “SP” within Dempster–Shafer theory. • The BBA of “SP” is reconstructed by a discounting rate based on the ECDM. • Simulation experiments are illustrated to show the efficiency of the proposed method. - Abstract: Dependence assessment among human errors plays an important role in human reliability analysis. When dependence between two sequent tasks exists in human reliability analysis, if the preceding task fails, the failure probability of the following task is higher than if it succeeds. Typically, three major factors are considered: “Closeness in Time” (CT), “Task Relatedness” (TR) and “Similarity of Performers” (SP). Assuming TR is unchanged, both SP and CT influence the degree of dependence, and in this paper SP is discounted over time as the result of combining the two factors. In this paper, a new computational model is proposed based on the Dempster–Shafer Evidence Theory (DSET) and the Evidence Credibility Decay Model (ECDM) to assess the dependence between tasks in human reliability analysis. First, the influencing factors among human tasks are identified and the basic belief assignments (BBAs) of each factor are constructed based on expert evaluation. Then, the BBA of SP is discounted as the result of combining the two factors and reconstructed by using the ECDM, and the factors are integrated into a fused BBA. Finally, the dependence level is calculated based on the fused BBA. Experimental results demonstrate that the proposed model not only quantitatively describes how the input factors influence the dependence level, but also shows exactly how the dependence level changes with different situations of the input factors.
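
    To make the evidence-theory machinery concrete, the sketch below discounts a basic belief assignment by a credibility factor and then combines two BBAs with Dempster's rule over a simple two-element frame; the masses and the exponential decay form are illustrative assumptions, not the exact ECDM of the paper.

        # Sketch: evidence discounting and Dempster's rule on a tiny frame
        # (mass values and the exponential decay factor are illustrative assumptions).
        import math
        from itertools import product

        FRAME = frozenset({"dep", "ind"})

        def discount(bba, alpha):
            """Shafer discounting: scale focal masses by alpha, move the rest to the frame."""
            out = {A: alpha * m for A, m in bba.items()}
            out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
            return out

        def combine(m1, m2):
            """Dempster's rule of combination for two BBAs with set-valued focal elements."""
            raw, conflict = {}, 0.0
            for (A, a), (B, b) in product(m1.items(), m2.items()):
                C = A & B
                if C:
                    raw[C] = raw.get(C, 0.0) + a * b
                else:
                    conflict += a * b
            return {A: v / (1.0 - conflict) for A, v in raw.items()}

        m_sp = {frozenset({"dep"}): 0.7, FRAME: 0.3}                              # "Similarity of Performers"
        m_ct = {frozenset({"dep"}): 0.5, frozenset({"ind"}): 0.3, FRAME: 0.2}     # "Closeness in Time"

        alpha = math.exp(-0.1 * 5.0)      # assumed credibility decay after 5 time units
        fused = combine(discount(m_sp, alpha), m_ct)
        for focal, mass in fused.items():
            print(set(focal), round(mass, 3))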

  8. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions of reliability, assumption of MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design of reliability, design of reliability for prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  9. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.

  10. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in technicians' workloads research is probably associated with the recent surge in competition. This was prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability that complements factory information obtained. The information used emerged from technicians' productivity and earned-values using the concept of multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned-values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  11. The cognitive environment simulation as a tool for modeling human performance and reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Pople, H. Jr.; Roth, E.M.

    1990-01-01

    The US Nuclear Regulatory Commission is sponsoring a research program to develop improved methods to model the cognitive behavior of nuclear power plant (NPP) personnel. Under this program, a tool for simulating how people form intentions to act in NPP emergency situations was developed using artificial intelligence (AI) techniques. This tool is called the Cognitive Environment Simulation (CES). The Cognitive Reliability Assessment Technique (or CREATE) was also developed to specify how CES can be used to enhance the measurement of the human contribution to risk in probabilistic risk assessment (PRA) studies. The next step in the research program was to evaluate the modeling tool and the method for using the tool for Human Reliability Analysis (HRA) in PRAs. Three evaluation activities were conducted. First, a panel of highly distinguished experts in cognitive modeling, AI, PRA and HRA provided a technical review of the simulation development work. Second, based on panel recommendations, CES was exercised on a family of steam generator tube rupture incidents where empirical data on operator performance already existed. Third, a workshop with HRA practitioners was held to analyze a worked example of the CREATE method to evaluate the role of CES/CREATE in HRA. The results of all three evaluations indicate that CES/CREATE represents a promising approach to modeling operator intention formation during emergency operations.

  12. Molecular hydrogen solvated in water – A computational study

    International Nuclear Information System (INIS)

    Śmiechowski, Maciej

    2015-01-01

    The aqueous hydrogen molecule is studied with molecular dynamics simulations at ambient temperature and pressure conditions, using a newly developed flexible and polarizable H2 molecule model. The design and implementation of this model, compatible with an existing flexible and polarizable force field for water, is presented in detail. The structure of the hydration layer suggests that first-shell water molecules accommodate the H2 molecule without major structural distortions and two-dimensional, radial-angular distribution functions indicate that as opposed to strictly tangential, the orientation of these water molecules is such that the solute is solvated with one of the free electron pairs of H2O. The calculated self-diffusion coefficient of H2(aq) agrees very well with experimental results and the time dependence of mean square displacement suggests the presence of caging on a time scale corresponding to hydrogen bond network vibrations in liquid water. Orientational correlation function of H2 experiences an extremely short-scale decay, making the H2–H2O interaction potential essentially isotropic by virtue of rotational averaging. The inclusion of explicit polarizability in the model allows for the calculation of Raman spectra that agree very well with available experimental data on H2(aq) under differing pressure conditions, including accurate reproduction of the experimentally noted trends with solute pressure or concentration.
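
    As a reminder of how a self-diffusion coefficient is typically extracted from the mean square displacement mentioned above, the sketch below applies the Einstein relation to a synthetic random-walk trajectory; it does not use the polarizable H2 model or the actual simulation data.

        # Sketch: self-diffusion coefficient from the Einstein relation, D = MSD slope / 6
        # (the trajectory here is a synthetic random walk, not simulation output).
        import numpy as np

        rng = np.random.default_rng(1)
        dt = 0.002                                         # ps, assumed timestep
        steps = rng.normal(0.0, 0.005, size=(50000, 3))    # nm displacements per step
        traj = np.cumsum(steps, axis=0)                    # particle position vs. time

        lags = np.arange(100, 2000, 100)
        msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                        for lag in lags])
        slope = np.polyfit(lags * dt, msd, 1)[0]           # nm^2 / ps
        D = slope / 6.0
        print(f"D ~ {D:.3e} nm^2/ps  ({D * 1e-2:.3e} cm^2/s)")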

  13. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential for the assessment of internal doses and a radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr); one was reported by the International Commission on Radiological Protection (ICRP), and one was developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the most influential model parameter on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper large intestine contributes substantially in the later days, i.e., after day 300. The alimentary tract absorption factor (fA) influences mostly the integral of radioactivity of Zr in most source organs after ingestion.
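
    Biokinetic models of this kind are systems of first-order transfer equations between compartments. The sketch below integrates a deliberately simplified two-compartment blood-bone example with assumed transfer rates and perturbs one rate as a one-at-a-time sensitivity check; it is not the ICRP or HMGU zirconium model.

        # Sketch: a toy two-compartment biokinetic model (blood <-> bone) with an
        # excretion path; transfer rates are assumed, not the Zr model parameters.
        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, k_bb, k_bone_blood, k_exc):
            blood, bone = y
            d_blood = -(k_bb + k_exc) * blood + k_bone_blood * bone
            d_bone = k_bb * blood - k_bone_blood * bone
            return [d_blood, d_bone]

        def bone_content(k_bb, k_bone_blood=0.0002, k_exc=0.5, t_end=1000.0):
            sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0],
                            args=(k_bb, k_bone_blood, k_exc))
            return sol.y[1, -1]

        base = bone_content(k_bb=0.1)
        perturbed = bone_content(k_bb=0.11)              # +10% blood-to-bone rate
        sensitivity = (perturbed - base) / base / 0.10   # normalised sensitivity
        print(f"bone content at t_end: {base:.4f}, sensitivity to k_bb ~ {sensitivity:.2f}")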

  14. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology (ICT) context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  15. Water-enhanced solvation of organics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jane H. [Univ. of California, Berkeley, CA (United States)

    1993-07-01

    Water-enhanced solvation (WES) was explored for Lewis acid solutes in Lewis base organic solvents, to develop cheap extract regeneration processes. WES for solid solutes was determined from ratios of solubilities of solutes in water-saturated and low-water solvent; both were determined from solid-liquid equilibrium. Vapor-headspace analysis was used to determine solute activity coefficients as a function of organic-phase water concentration. WES magnitudes of volatile solutes were normalized, set equal to the slope of the log γ_s vs. x_w/x_s curve. From the graph shape, Δ(log γ_s) represents the relative change in the solute activity coefficient. Solutes investigated by vapor-headspace analysis were acetic acid, propionic acid, ethanol, 1,2-propylene glycol, 2,3-butylene glycol. Monocarboxylic acids had the largest decrease in activity coefficient with water addition, followed by glycols and alcohols. Propionic acid in cyclohexanone showed the greatest water-enhancement, Δ(log γ_acid)/Δ(x_w/x_acid) = -0.25. In methylcyclohexanone, the decrease of the activity coefficient of propionic acid was -0.19. The activity coefficient of propionic acid in methylcyclohexanone stopped decreasing once the water reached a 2:1 water to acid mole ratio, implying a stoichiometric relation between water, ketone, and acid. Except for 2,3-butanediol, activity coefficients of the solutes studied decreased monotonically with water content. Activity coefficient curves of ethanol, 1,2-propanediol and 2,3-butanediol did not level off at large water/solute mole ratio. Solutes investigated by solid-liquid equilibrium were citric acid, gallic acid, phenol, xylenols, 2-naphthol. The saturation concentration of citric acid in anhydrous butyl acetate increased from 0.0009 to 0.087 mol/L after 1.3 % (g/g) water co-dissolved into the organic phase. The effect of water-enhanced solvation for citric acid is very large but very small for phenol and its derivatives.

  16. Solvation behavior of carbonate-based electrolytes in sodium ion batteries.

    Science.gov (United States)

    Cresce, Arthur V; Russell, Selena M; Borodin, Oleg; Allen, Joshua A; Schroeder, Marshall A; Dai, Michael; Peng, Jing; Gobet, Mallory P; Greenbaum, Steven G; Rogers, Reginald E; Xu, Kang

    2016-12-21

    Sodium ion batteries are on the cusp of being a commercially available technology. Compared to lithium ion batteries, sodium ion batteries can potentially offer an attractive dollar-per-kilowatt-hour value, though at the penalty of reduced energy density. As a materials system, sodium ion batteries present a unique opportunity to apply lessons learned in the study of electrolytes for lithium ion batteries; specifically, the behavior of the sodium ion in an organic carbonate solution and the relationship of ion solvation with electrode surface passivation. In this work the Li+- and Na+-based solvates were characterized using electrospray mass spectrometry, infrared and Raman spectroscopy, 17O, 23Na and pulse field gradient double-stimulated-echo pulse sequence nuclear magnetic resonance (NMR), and conductivity measurements. Spectroscopic evidence demonstrates that the Li+ and Na+ cations share a number of similar ion-solvent interaction trends, such as a preference in the gas and liquid phase for a solvation shell rich in cyclic carbonates over linear carbonates and fluorinated carbonates. However, quite different IR spectra due to the PF6− anion interactions with the Na+ and Li+ cations were observed and were rationalized with the help of density functional theory (DFT) calculations that were also used to examine the relative free energies of solvates using cluster-continuum models. Ion-solvent distances for Na+ were longer than for Li+, and Na+ had a greater tendency towards forming contact pairs compared to Li+ in linear carbonate solvents. In tests of hard carbon Na-ion batteries, performance was not well correlated to Na+ solvent preference, leading to the possibility that Na+ solvent preference may play a reduced role in the passivation of anode surfaces and overall Na-ion battery performance.

  17. The role of solvation in the binding selectivity of the L-type calcium channel.

    Science.gov (United States)

    Boda, Dezső; Henderson, Douglas; Gillespie, Dirk

    2013-08-07

    We present grand canonical Monte Carlo simulation results for a reduced model of the L-type calcium channel. While charged residues of the protein amino acids in the selectivity filter are treated explicitly, most of the degrees of freedom (including the rest of the protein and the solvent) are represented by their dielectric response, i.e., dielectric continua. The new aspect of this paper is that the dielectric coefficient in the channel is different from that in the baths. The ions entering the channel, thus, cross a dielectric boundary at the entrance of the channel. Simulating this case has been made possible by our recent methodological development [D. Boda, D. Henderson, B. Eisenberg, and D. Gillespie, J. Chem. Phys. 135, 064105 (2011)]. Our main focus is on the effect of solvation energy (represented by the Born energy) on monovalent vs. divalent ion selectivity in the channel. We find no significant change in selectivity by changing the dielectric coefficient in the channel because the larger solvation penalty is counterbalanced by the enhanced Coulomb attraction inside the channel as soon as we use the Born radii (fitted to experimental hydration energies) to compute the solvation penalty from the Born equation.
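
    The solvation penalty referred to above is computed from the Born equation; a minimal sketch of that formula is given below, with an illustrative ionic radius and dielectric coefficients rather than the paper's fitted Born radii.

        # Sketch: Born solvation energy of an ion as a function of the dielectric
        # coefficient of the surrounding medium. Radius and dielectric values are
        # illustrative, not the fitted parameters of the cited study.
        import math

        E_CHARGE = 1.602176634e-19      # C
        EPS0 = 8.8541878128e-12         # F/m
        N_A = 6.02214076e23             # 1/mol

        def born_energy(z, radius_nm, eps):
            """Solvation free energy (J/mol) of a charge-z ion of given radius in dielectric eps."""
            r = radius_nm * 1e-9
            return -N_A * (z * E_CHARGE)**2 / (8.0 * math.pi * EPS0 * r) * (1.0 - 1.0 / eps)

        # Penalty for moving a divalent ion (r ~ 0.17 nm assumed) from water (eps 78.5)
        # into a channel region with a lower dielectric coefficient (eps 10 assumed).
        dG = (born_energy(2, 0.17, 10.0) - born_energy(2, 0.17, 78.5)) / 1000.0
        print(f"Born solvation penalty ~ {dG:.0f} kJ/mol")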

  18. Ejection of solvated ions from electrosprayed methanol/water nanodroplets studied by molecular dynamics simulations.

    Science.gov (United States)

    Ahadi, Elias; Konermann, Lars

    2011-06-22

    The ejection of solvated small ions from nanometer-sized droplets plays a central role during electrospray ionization (ESI). Molecular dynamics (MD) simulations can provide insights into the nanodroplet behavior. Earlier MD studies have largely focused on aqueous systems, whereas most practical ESI applications involve the use of organic cosolvents. We conduct simulations on mixed water/methanol droplets that carry excess NH(4)(+) ions. Methanol is found to compromise the H-bonding network, resulting in greatly increased rates of ion ejection and solvent evaporation. Considerable differences in the water and methanol escape rates cause time-dependent changes in droplet composition. Segregation occurs at low methanol concentration, such that layered droplets with a methanol-enriched periphery are formed. This phenomenon will enhance the partitioning of analyte molecules, with possible implications for their ESI efficiencies. Solvated ions are ejected from the tip of surface protrusions. Solvent bridging prior to ion secession is more extensive for methanol/water droplets than for purely aqueous systems. The ejection of solvated NH(4)(+) is visualized as diffusion-mediated escape from a metastable basin. The process involves thermally activated crossing of a ~30 kJ mol(-1) free energy barrier, in close agreement with the predictions of the classical ion evaporation model.
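
    To put the quoted ~30 kJ/mol free energy barrier in perspective, the sketch below evaluates a simple Arrhenius-type rate estimate; the attempt frequency is an assumed value and the calculation is only an order-of-magnitude illustration, not the authors' kinetic analysis.

        # Sketch: order-of-magnitude escape rate over a ~30 kJ/mol barrier
        # using k = A * exp(-dG / RT); the prefactor A is an assumed attempt frequency.
        import math

        R = 8.314          # J/(mol K)
        T = 300.0          # K, assumed droplet temperature
        dG = 30e3          # J/mol, barrier quoted in the abstract
        A = 1e12           # 1/s, assumed attempt frequency

        k = A * math.exp(-dG / (R * T))
        print(f"escape rate ~ {k:.2e} s^-1, mean waiting time ~ {1.0/k*1e9:.1f} ns")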

  19. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in the methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking the critical items for prioritizing maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient is called the hazardous risk coefficient, which is due to anticipated hazards which may occur in the future; the risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient, hence `random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment shall be useful in optimization of financial losses and timing of maintenance actions.

  20. Structural aspects of the solvation shell of lysine and acetylated lysine: A Car-Parrinello and classical molecular dynamics investigation

    International Nuclear Information System (INIS)

    Carnevale, V.; Raugei, S.

    2009-01-01

    Lysine acetylation is a post-translational modification, which modulates the affinity of protein-protein and/or protein-DNA complexes. Its crucial role as a switch in signaling pathways highlights the relevance of charged chemical groups in determining the interactions between water and biomolecules. A great effort has been recently devoted to assess the reliability of classical molecular dynamics simulations in describing the solvation properties of charged moieties. In the spirit of these investigations, we performed classical and Car-Parrinello molecular dynamics simulations on lysine and acetylated-lysine in aqueous solution. A comparative analysis between the two computational schemes is presented with a focus on the first solvation shell of the charged groups. An accurate structural analysis unveils subtle, yet statistically significant, differences which are discussed in connection to the significant electronic density charge transfer occurring between the solute and the surrounding water molecules.

  1. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  2. The constant failure rate model for fault tree evaluation as a tool for unit protection reliability assessment

    International Nuclear Information System (INIS)

    Vichev, S.; Bogdanov, D.

    2000-01-01

    The purpose of this paper is to introduce the fault tree analysis method as a tool for unit protection reliability estimation. The constant failure rate model is applied for reliability assessment, and especially for availability assessment. For that purpose, an example of a unit primary equipment structure and a fault tree example for a simplified unit protection system are presented. (author)
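
    For illustration of the constant failure rate model in an availability context, the sketch below computes steady-state unavailabilities from assumed failure and repair rates and combines them through a small AND/OR fault tree; the rates and tree structure are invented, not the unit-protection example of the paper.

        # Sketch: steady-state unavailability with constant failure/repair rates,
        # combined through a tiny fault tree (rates and structure are invented).
        def unavailability(lam, mu):
            """Steady-state unavailability q = lambda / (lambda + mu)."""
            return lam / (lam + mu)

        q_relay   = unavailability(2e-5, 0.5)     # [1/h] failure and repair rates
        q_ct      = unavailability(1e-5, 0.5)     # measuring channel
        q_breaker = unavailability(5e-6, 0.25)

        # OR gate: protection fails if relay OR measuring channel fails;
        # AND gate: top event needs that AND an independent breaker fault.
        q_or  = 1.0 - (1.0 - q_relay) * (1.0 - q_ct)
        q_top = q_or * q_breaker
        print(f"q_or = {q_or:.3e}, q_top = {q_top:.3e}")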

  3. Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

    Directory of Open Access Journals (Sweden)

    Qixuan Bi

    2017-06-01

    Full Text Available In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameters but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. Analysis of a real dataset is performed.
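
    A quick way to see what the stress-strength reliability R = P(X > Y) means for inverse Weibull variables is the Monte Carlo sketch below; the common shape and the two scale parameters are assumed values, and the simulation is not a substitute for the estimators studied in the paper.

        # Sketch: Monte Carlo estimate of R = P(X > Y) for inverse Weibull stress-strength
        # (common shape and the two scale parameters below are assumed values).
        import numpy as np

        rng = np.random.default_rng(42)
        shape, scale_x, scale_y = 2.0, 3.0, 2.0      # assumed parameters
        n = 200_000

        def inv_weibull(shape, scale, size):
            """Draw inverse Weibull variates via the inverse CDF, F(x) = exp(-(scale/x)^shape)."""
            u = rng.uniform(size=size)
            return scale * (-np.log(u)) ** (-1.0 / shape)

        x = inv_weibull(shape, scale_x, n)           # strength
        y = inv_weibull(shape, scale_y, n)           # stress
        print(f"R = P(X > Y) ~ {np.mean(x > y):.4f}")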

  4. Validated Loads Prediction Models for Offshore Wind Turbines for Enhanced Component Reliability

    DEFF Research Database (Denmark)

    Koukoura, Christina

    To improve the reliability of offshore wind turbines, accurate prediction of their response is required. Therefore, validation of models with site measurements is imperative. In the present thesis a 3.6 MW pitch-regulated, variable-speed offshore wind turbine on a monopile foundation is built...... are used for the modification of the sub-structure/foundation design for possible material savings. First, the background of offshore wind engineering, including wind-wave conditions, support structure, blade loading and wind turbine dynamics are presented. Second, a detailed description of the site...

  5. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

    LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs have been aged for a reliability analysis. LEDs have been grouped following nine current-temperature stress conditions. The stress driving current was fixed between 350 mA and 1 A and the ambient temperature between 85°C and 120°C. Using integrating sphere and I(V) measurements, a cross study of the evolution of electrical and optical characteristics has been done. Results show two main failure mechanisms regarding lumen maintenance. The first one is the typically observed lumen depreciation and the second one is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been made using electrical and optical measurements during the aging tests. The combination of these models allows a new method toward a quicker LED lifetime prediction. These two models have been used for lifetime predictions for LEDs.
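
    The conventional lumen-maintenance projection underlying such lifetime models can be written as an exponential depreciation L(t) = B·exp(-αt) extrapolated to the 70% threshold (the TM-21-style approach). The sketch below fits that form to made-up aging data; it is not the authors' combined lumen-depreciation/leakage-resistance model.

        # Sketch: exponential lumen-depreciation fit and L70 extrapolation
        # (aging data below are invented; this is the conventional TM-21-style form,
        #  not the combined model of the cited paper).
        import numpy as np

        hours = np.array([0, 500, 1000, 2000, 3000, 4000, 5000, 6000], dtype=float)
        lumen = np.array([1.00, 0.995, 0.988, 0.976, 0.965, 0.953, 0.942, 0.931])

        # Fit ln(L) = ln(B) - alpha * t  ->  L(t) = B * exp(-alpha * t)
        slope, intercept = np.polyfit(hours, np.log(lumen), 1)
        alpha, B = -slope, np.exp(intercept)

        L70 = np.log(B / 0.70) / alpha        # time at which L(t) drops to 70%
        print(f"alpha = {alpha:.3e} per hour, projected L70 ~ {L70:.0f} h")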

  6. Reliability of Current Biokinetic and Dosimetric Models for Radionuclides: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, Richard Wayne [ORNL; Eckerman, Keith F [ORNL; Meck, Robert A. [U.S. Nuclear Regulatory Commission

    2008-10-01

    This report describes the results of a pilot study of the reliability of the biokinetic and dosimetric models currently used by the U.S. Nuclear Regulatory Commission (NRC) as predictors of dose per unit internal or external exposure to radionuclides. The study examines the feasibility of critically evaluating the accuracy of these models for a comprehensive set of radionuclides of concern to the NRC. Each critical evaluation would include: identification of discrepancies between the models and current databases; characterization of uncertainties in model predictions of dose per unit intake or unit external exposure; characterization of variability in dose per unit intake or unit external exposure; and evaluation of prospects for development of more accurate models. Uncertainty refers here to the level of knowledge of a central value for a population, and variability refers to quantitative differences between different members of a population. This pilot study provides a critical assessment of models for selected radionuclides representing different levels of knowledge of dose per unit exposure. The main conclusions of this study are as follows: (1) To optimize the use of available NRC resources, the full study should focus on radionuclides most frequently encountered in the workplace or environment. A list of 50 radionuclides is proposed. (2) The reliability of a dose coefficient for inhalation or ingestion of a radionuclide (i.e., an estimate of dose per unit intake) may depend strongly on the specific application. Multiple characterizations of the uncertainty in a dose coefficient for inhalation or ingestion of a radionuclide may be needed for different forms of the radionuclide and different levels of information of that form available to the dose analyst. (3) A meaningful characterization of variability in dose per unit intake of a radionuclide requires detailed information on the biokinetics of the radionuclide and hence is not feasible for many infrequently

  7. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context has been developed, sensitivity tested, and evaluated. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model along with its generous input parameters and output variables allows its usefulness to extend beyond its input to PRA

  8. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of 10 Gbps high speed networks and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.

  9. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircrafts, satellites and lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used in variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from Earth's core can be described in relatively fewer parameters and is suitable for offline computation, the magnetic sources from Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for a fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real-time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  10. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  11. [Reliability of three dimensional resin model by rapid prototyping manufacturing and digital modeling].

    Science.gov (United States)

    Zeng, Fei-huang; Xu, Yuan-zhi; Fang, Li; Tang, Xiao-shan

    2012-02-01

    To describe a new technique for fabricating a 3D resin model by 3D reconstruction and rapid prototyping, and to analyze the precision of this method. An optical grating scanner was used to acquire the data of a silastic cavity block, and a digital dental cast was reconstructed from the data with the Geomagic Studio image processing software. The final 3D reconstruction was saved in STL format. The 3D resin model was fabricated by fused deposition modeling, and was compared with the digital model and the gypsum model. The data of the three groups were statistically analyzed using the SPSS 16.0 software package. No significant difference was found among the gypsum model, the digital dental cast and the 3D resin model (P>0.05). Rapid prototyping manufacturing and digital modeling would be helpful for dental information acquisition, treatment design, and appliance manufacturing, and can improve the communication between patients and doctors.

  12. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  13. Reliability and validity of Champion's Health Belief Model Scale for breast cancer screening among Malaysian women.

    Science.gov (United States)

    Parsa, P; Kandiah, M; Mohd Nasir, M T; Hejar, A R; Nor Afiah, M Z

    2008-11-01

    Breast cancer is the leading cause of cancer deaths in Malaysian women, and the use of breast self-examination (BSE), clinical breast examination (CBE) and mammography remain low in Malaysia. Therefore, there is a need to develop a valid and reliable tool to measure the beliefs that influence breast cancer screening practices. The Champion's Health Belief Model Scale (CHBMS) is a valid and reliable tool to measure beliefs about breast cancer and screening methods in the Western culture. The purpose of this study was to translate the use of CHBMS into the Malaysian context and validate the scale among Malaysian women. A random sample of 425 women teachers was taken from 24 secondary schools in Selangor state, Malaysia. The CHBMS was translated into the Malay language, validated by an expert panel, back translated, and pretested. Analyses included descriptive statistics of all the study variables, reliability estimates, and construct validity using factor analysis. The mean age of the respondents was 37.2 (standard deviation 7.1) years. Factor analysis yielded ten factors for BSE with eigenvalues greater than 1 (four factors more than the original): confidence 1 (ability to differentiate normal and abnormal changes in the breasts), barriers to BSE, susceptibility for breast cancer, benefits of BSE, health motivation 1 (general health), seriousness 1 (fear of breast cancer), confidence 2 (ability to detect size of lumps), seriousness 2 (fear of long-term effects of breast cancer), health motivation 2 (preventive health practice), and confidence 3 (ability to perform BSE correctly). For CBE and mammography scales, seven factors each were identified. Factors for CBE scale include susceptibility, health motivation 1, benefits of CBE, seriousness 1, barriers of CBE, seriousness 2 and health motivation 2. For mammography the scale includes benefits of mammography, susceptibility, health motivation 1, seriousness 1, barriers to mammography, seriousness 2 and health

  14. Identifying the effects of parameter uncertainty on the reliability of riverbank stability modelling

    Science.gov (United States)

    Samadi, A.; Amiri-Tokaldany, E.; Darby, S. E.

    2009-05-01

    Bank retreat is a key process in fluvial dynamics affecting a wide range of physical, ecological and socioeconomic issues in the fluvial environment. To predict the undesirable effects of bank retreat and to inform effective measures to prevent it, a wide range of bank stability models have been presented in the literature. These models typically express bank stability by defining a factor of safety as the ratio of driving and resisting forces acting on the incipient failure block. These forces are affected by a range of controlling factors that include such aspects as the bank profile (bank height and angle), the geotechnical properties of the bank materials, and the hydrological status of the riverbanks. In this paper we evaluate the extent to which uncertainties in the parameterization of these controlling factors feed through to influence the reliability of the resulting bank stability estimate. This is achieved by employing a simple model of riverbank stability with respect to planar failure (the most common type of bank stability model) in a series of sensitivity tests and Monte Carlo analyses to identify, for each model parameter, the range of values that induce significant changes in the simulated factor of safety. These identified parameter value ranges are compared to empirically derived parameter uncertainties to determine whether they are likely to confound the reliability of the resulting bank stability calculations. Our results show that parameter uncertainties are typically large enough that the likelihood of generating unreliable predictions is very high (>~80% for predictions requiring a precision of <±15%). Because parameter uncertainties are derived primarily from the natural variability of the parameters, rather than measurement errors, much more careful attention should be paid to field sampling strategies, such that the parameter uncertainties and consequent prediction unreliabilities can be quantified more
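
    A minimal sketch of the kind of Monte Carlo analysis described above is given below. It propagates assumed parameter uncertainties through a simplified Culmann-type planar-failure factor of safety for a vertical bank; both the expression and the parameter distributions are illustrative assumptions, not the paper's calibrated model.

        # Monte Carlo propagation of parameter uncertainty into a planar-failure
        # factor of safety (simplified Culmann form for a vertical bank).
        # All parameter distributions below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        c     = rng.normal(5.0, 1.5, n)                 # effective cohesion, kPa
        phi   = np.radians(rng.normal(30.0, 4.0, n))    # effective friction angle
        gamma = rng.normal(18.0, 1.0, n)                # unit weight, kN/m^3
        H     = 4.0                                     # bank height, m
        beta  = np.radians(60.0)                        # failure plane angle

        L  = H / np.sin(beta)                           # failure plane length (per unit width)
        W  = 0.5 * gamma * H**2 / np.tan(beta)          # weight of the failure block
        fs = (c * L + W * np.cos(beta) * np.tan(phi)) / (W * np.sin(beta))

        print("mean factor of safety:", fs.mean().round(2))
        print("P(FS < 1)            :", (fs < 1.0).mean().round(3))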

  15. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment where humans co-exist. Reliability of localization is highly dependent on the developer's experience, because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme for identifying the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient scheme for recovering from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
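
    As a rough illustration of the first question, the sketch below shows one common form of observation likelihood for a range sensor: a Gaussian around the expected beam range mixed with a small uniform term so that unexplained readings do not collapse a particle's weight. The parameter values and function names are assumptions, not the authors' tuned model.

        # Assumed per-beam Gaussian-plus-uniform likelihood for range readings,
        # of the kind tuned when designing an observation likelihood model
        # for Monte Carlo localization.
        import numpy as np

        def beam_likelihood(measured, expected, sigma=0.05, z_rand=0.05, max_range=10.0):
            gauss = np.exp(-0.5 * ((measured - expected) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            return (1.0 - z_rand) * gauss + z_rand / max_range

        def particle_weight(scan, expected_scan):
            # product of per-beam likelihoods, accumulated in log space for stability
            lik = beam_likelihood(np.asarray(scan), np.asarray(expected_scan))
            return np.exp(np.sum(np.log(lik)))

        print("particle weight:", particle_weight([2.01, 3.48, 1.20], [2.00, 3.50, 1.25]))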

  16. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment where humans co-exist. Reliability of localization is highly dependent on the developer's experience, because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme for identifying the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient scheme for recovering from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  17. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

    Full Text Available This paper presents small-signal modeling using the state-space averaging technique and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, the converter can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency than the traditional voltage regulator. Small-signal analysis derives the control transfer functions, which are used to design a suitable controller for the closed-loop system during supply voltage variations. The closed-loop system with a PID controller eliminates transients in the output voltage and provides a regulated steady-state output. The proposed model was designed in RT-LAB and executed on a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very high speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results during supply voltage variations of the three-phase z-source ac-ac converter. Reliability analysis has been applied to the converter to determine the failure rate of its different components.
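
    The component-level reliability analysis mentioned above can be illustrated with a simple series (part-count) model, in which component failure rates are summed to give the converter failure rate; the failure-rate values below are placeholders, not figures from the paper.

        # Hedged sketch of a series part-count reliability estimate for a converter.
        # Failure rates (per 10^6 hours) are placeholder values only.
        import math

        failure_rates_per_1e6h = {
            "IGBT switches (x4)":  4 * 0.5,
            "Z-source inductors":  2 * 0.1,
            "Z-source capacitors": 2 * 0.3,
            "gate drivers (x4)":   4 * 0.2,
            "controller":          0.4,
        }

        lam = sum(failure_rates_per_1e6h.values()) / 1e6   # failures per hour
        mtbf_hours = 1.0 / lam
        reliability_1yr = math.exp(-lam * 8760)            # R(t) = exp(-lambda * t)

        print(f"MTBF ~ {mtbf_hours:,.0f} h, R(1 year) ~ {reliability_1yr:.3f}")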

  18. Reliability of some ageing nuclear power plant systems: a simple stochastic model

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2007-01-01

    The random number of failure-related events in certain repairable ageing systems, such as certain nuclear power plant components, during a given time interval may often be modelled by a compound Poisson distribution. One of these is the Polya-Aeppli distribution. The derivation of a stationary Polya-Aeppli distribution as a limiting distribution of rare events for stationary Bernoulli trials with first-order Markov dependence is considered. If the parameters of the Polya-Aeppli distribution are suitable functions of time, we could expect the resulting distribution to account for the distribution of failure-related events in an ageing system. Assuming that a critical number of damages produces an emergent failure, the above-mentioned results can be applied in a reliability analysis. It is natural to ask under what conditions a Polya-Aeppli distribution could be a limiting distribution for non-homogeneous Bernoulli trials with first-order Markov dependence. In this paper this problem is analyzed, and possible applications of the obtained results to ageing or deteriorating nuclear power plant components are considered. The two traditional ways of modelling repairable systems in reliability theory, the "as bad as old" concept, which assumes that the replaced component is in exactly the same condition as the aged component was before failure, and the "as good as new" concept, which assumes that the new component is in the same condition as the replaced component was when it was new, are briefly discussed in relation to the findings of the present work.
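
    Since the Polya-Aeppli distribution is a geometric compound Poisson distribution, failure-event counts can be sampled as a Poisson number of damage clusters, each contributing a geometrically distributed number of events. The sketch below uses illustrative parameter values, not values fitted to plant data.

        # Sampling event counts from a Polya-Aeppli (geometric compound Poisson)
        # distribution; lam and p are illustrative parameters only.
        import numpy as np

        rng = np.random.default_rng(2)

        def polya_aeppli_sample(lam, p, size):
            clusters = rng.poisson(lam, size)  # number of damage clusters per interval
            return np.array([rng.geometric(p, k).sum() if k > 0 else 0 for k in clusters])

        counts = polya_aeppli_sample(lam=2.0, p=0.6, size=50_000)
        print("mean events per interval:", counts.mean().round(3))   # theory: lam / p
        print("P(count >= 5)           :", (counts >= 5).mean().round(4))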

  19. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV generator-loads-battery storage system performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation with a 95% success rate. -- Abstract: The large fluctuations observed in daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (P_m) and larger battery storage capacity (C_L). This leads to increased costs and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month in which the sizing is applied, as well as the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA's Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to
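
    A stripped-down version of such an energy-balance simulation is sketched below: stochastic daily irradiation, a fixed daily load, and a battery whose state of charge is tracked to estimate how often the load goes unmet for a given P_m and C_L. The Weibull irradiation model and all parameter values are assumptions for illustration, not the paper's calibrated inputs.

        # Illustrative daily energy-balance simulation for a stand-alone PV system.
        # Irradiation model and all parameter values are assumed.
        import numpy as np

        rng = np.random.default_rng(3)

        days       = 10_000
        H_daily    = rng.weibull(2.0, days) * 3.5   # daily irradiation, kWh/m^2
        P_m        = 1.2                            # installed peak power, kWp
        C_L        = 6.0                            # usable battery capacity, kWh
        load       = 3.0                            # daily load, kWh
        efficiency = 0.8                            # balance-of-system efficiency

        soc, failures = 0.5 * C_L, 0
        for h in H_daily:
            soc += P_m * h * efficiency - load      # net daily energy balance
            soc = min(soc, C_L)                     # excess energy is curtailed (burnt)
            if soc < 0.0:
                failures += 1                       # load not fully met this day
                soc = 0.0

        print("fraction of days with unmet load:", failures / days)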

  20. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliabilities.