WorldWideScience

Sample records for order reliability method

  1. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles...... with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...

  2. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.

  3. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables in connection with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.

  4. Sensitivity Weaknesses in Application of some Statistical Distributions in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

    It has been observed and shown that in some examples a sensitivity analysis of the first order reliability index results in an increasing reliability index when the standard deviation for a stochastic variable is increased while the expected value is fixed. This unfortunate behaviour can occur when...... a stochastic variable is modelled by an asymmetrical density function. For lognormal, Gumbel and Weibull distributed stochastic variables it is shown for which combinations of the β-point, the expected value and standard deviation the weakness can occur. In relation to practical application the behaviour...... is probably rather infrequent. A simple example is shown as illustration and to exemplify that for second order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent....

  5. Uncertainty analysis of nonlinear systems employing the first-order reliability method

    International Nuclear Information System (INIS)

    Choi, Chan Kyu; Yoo, Hong Hee

    2012-01-01

    In most mechanical systems, properties of the system elements have uncertainties due to several reasons. For example, mass, the stiffness coefficient of a spring, the damping coefficient of a damper or friction coefficients have uncertain characteristics. The uncertain characteristics of the elements have a direct effect on the system performance uncertainty. It is very important to estimate the performance uncertainty since the performance uncertainty is directly related to manufacturing yield and consumer satisfaction. Due to this reason, the performance uncertainty should be estimated accurately and considered in the system design. In this paper, performance measures are defined for nonlinear vibration systems and the performance measure uncertainties are estimated employing the first order reliability method (FORM). It was found that the FORM could provide good results in spite of the system's nonlinear characteristics. Compared with the results obtained by Monte Carlo simulation (MCS), the accuracy of the uncertainty analysis results obtained by the FORM is validated.
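As a concrete sketch of the FORM machinery referenced throughout these records, the following is a minimal Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration for a hypothetical linear limit state g = R - S with R ~ N(5,1) and S ~ N(3,1); the limit state and all numbers are illustrative, not taken from the record above.

```python
import math

# Illustrative limit state in standard normal space:
# g(u) = (5 + u1) - (3 + u2) = 2 + u1 - u2  (capacity minus demand)
def g(u):
    return 2.0 + u[0] - u[1]

def grad_g(u):
    return [1.0, -1.0]

def form_hlrf(g, grad_g, n=2, iters=100, tol=1e-12):
    """Return the reliability index beta and the most probable point u*."""
    u = [0.0] * n
    for _ in range(iters):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        # HL-RF update: project onto the linearized limit state g = 0
        scale = (sum(c * ui for c, ui in zip(gr, u)) - gv) / norm2
        u_new = [scale * c for c in gr]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))
    return beta, u

beta, u_star = form_hlrf(g, grad_g)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
print(beta, pf)
```

For this linear limit state the iteration converges in one step to beta = sqrt(2); for nonlinear limit states the same loop relinearizes at each new point.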

  6. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    Energy Technology Data Exchange (ETDEWEB)

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
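The IFORM construction described above can be sketched as follows: a circle of radius beta in standard normal space, where beta is set by the recurrence interval and the sea-state duration, is mapped through marginal transforms into physical (Hs, Te) space. The independent lognormal marginals and their parameters below are placeholder assumptions; real applications fit a joint model (often Te conditional on Hs) to hindcast or buoy data.

```python
import math

def phi_inv(p):
    """Inverse standard normal CDF by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(-mid / math.sqrt(2.0)) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 50-year return period with 1-hour sea states -> per-state exceedance prob.
n_states = 50 * 365.25 * 24
beta = phi_inv(1.0 - 1.0 / n_states)       # contour radius in u-space

# Hypothetical independent lognormal marginals for Hs and Te
mu_h, sig_h = 0.5, 0.4    # assumed lognormal parameters of Hs
mu_t, sig_t = 2.0, 0.2    # assumed lognormal parameters of Te

contour = []
for k in range(36):
    th = 2.0 * math.pi * k / 36
    u1, u2 = beta * math.cos(th), beta * math.sin(th)
    hs = math.exp(mu_h + sig_h * u1)        # inverse lognormal transform
    te = math.exp(mu_t + sig_t * u2)
    contour.append((hs, te))

print(round(beta, 3), max(h for h, _ in contour))
```

Structural responses are then evaluated along the (Hs, Te) contour to find the governing extreme sea state.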

  7. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
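A minimal example of the kind of Bayesian reliability estimation covered in the course, e.g. for a pipeline leak rate: a conjugate Gamma prior on a Poisson event rate, updated in closed form with observed counts. The prior parameters and the data are hypothetical, chosen only to show the update.

```python
# Conjugate Gamma-Poisson update for an event rate (e.g. leaks per year):
# prior Gamma(alpha0, beta0); observe k events in T years ->
# posterior Gamma(alpha0 + k, beta0 + T).
alpha0, beta0 = 1.0, 2.0      # assumed weakly informative prior
k, T = 3, 10.0                # hypothetical: 3 leaks observed in 10 years

alpha_post, beta_post = alpha0 + k, beta0 + T
rate_mean = alpha_post / beta_post   # posterior mean of the leak rate
print(rate_mean)
```

The posterior mean (1 + 3) / (2 + 10) = 1/3 per year blends the prior with the data; more observation years pull the estimate toward the empirical rate k/T.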

  8. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  9. Prediction of the shape of inline wave force and free surface elevation using First Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Ghadirian, Amin; Bredmose, Henrik; Schløer, Signe

    2017-01-01

    In design of substructures for offshore wind turbines, the extreme wave loads which are of interest in Ultimate Limit States are often estimated by choosing extreme events from linear random sea states and replacing them by either stream function wave theory or the NewWave theory of a certain...... design wave height. As these wave theories suffer from limitations such as symmetry around the crest, other methods to estimate the wave loads are needed. In the present paper, the First Order Reliability Method, FORM, is used systematically to estimate the most likely extreme wave shapes. Two parameters...... theory, that is, the most likely time history of inline force around a force peak of given value. The results of FORM and NewForce are linearly identical and show only minor deviations at second order. The FORM results are then compared to wave averaged measurements of the same criteria for crest height......

  10. An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization

    International Nuclear Information System (INIS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van

    2013-01-01

    In engineering, there exist both aleatory uncertainties due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization based interval analysis in the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples

  11. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...... of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...... of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text....

  12. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Willems, Patrick

    2007-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms...... or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the First Order Reliability Method (FORM). To apply this method, a long rainfall time series was divided into rain storms (rain events), and each rain......

  13. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    Science.gov (United States)

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape and the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
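The Gaussian rainstorm parameterization described above can be sketched as follows. The function and parameter names are illustrative, and the scaling (peak intensity matched exactly, storm depth matched through the pulse width) is one plausible reading of the three-parameter scheme, not the paper's exact formulation.

```python
import math

def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, t):
    """Synthetic rain-storm intensity i(t): a Gaussian pulse centred on the
    storm, with its width chosen so the pulse integrates to the storm depth.
    Parameter names are illustrative, not taken from the paper."""
    t0 = duration_min / 2.0
    sigma = depth_mm / (peak_mm_per_min * math.sqrt(2.0 * math.pi))
    return peak_mm_per_min * math.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

# Hypothetical storm: 20 mm depth over 120 min, peaking at 0.5 mm/min
i_peak = gaussian_hyetograph(20.0, 120.0, 0.5, 60.0)

# Numeric check that the pulse integrates to roughly the storm depth
step = 0.1
total = sum(gaussian_hyetograph(20.0, 120.0, 0.5, k * step) * step
            for k in range(0, 1201))
print(i_peak, round(total, 2))
```

Each historical storm is thus reduced to three random variables (depth, duration, peak intensity), which is what makes the FORM formulation tractable.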

  14. 76 FR 66055 - North American Electric Reliability Corporation; Order Approving Interpretation of Reliability...

    Science.gov (United States)

    2011-10-25

    ...\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ¶ 31,242... materially affected" by Bulk-Power System reliability may request an interpretation of a Reliability... Electric Reliability Corporation; Order Approving Interpretation of Reliability Standard; Before...

  15. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow the non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. The Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects which are on-demand type directly affecting safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed.

  16. 76 FR 66057 - North American Electric Reliability Corporation; Order Approving Regional Reliability Standard

    Science.gov (United States)

    2011-10-25

    ... Reliability Standard that is necessitated by a physical difference in the Bulk-Power System.\\7\\ \\7\\ Order No... Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ¶ 31,242, order on reh'g... electric system event analyses and thereby improve system reliability by promoting improved system design...

  17. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro have established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  18. Structural reliability methods: Code development status

    Science.gov (United States)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  19. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation for safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V&V defect density characteristics of each stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery, helping to improve the reliability of NPP safety important software. (authors)

  20. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are examined.

  1. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
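The Monte Carlo route from probability of failure to reliability index described above can be sketched in a few lines, for an assumed linear limit state g = R - S with normal capacity and demand (not the bridge pier of the record); the exact answer here is Pf = Phi(-sqrt(2)) ≈ 0.0786, which the simulation should reproduce.

```python
import random
from statistics import NormalDist

random.seed(1)
N = 200_000

# Illustrative limit state g = R - S with R ~ N(5,1), S ~ N(3,1)
failures = 0
for _ in range(N):
    r = random.gauss(5.0, 1.0)   # capacity sample
    s = random.gauss(3.0, 1.0)   # demand sample
    if r - s < 0.0:
        failures += 1

pf = failures / N
beta = -NormalDist().inv_cdf(pf)   # reliability index from Pf
print(round(pf, 4), round(beta, 3))
```

The same counting loop works for any limit state a structural model can evaluate, which is why Monte Carlo is the reference method for multivariate problems.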

  2. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co...

  3. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) will be challenging to apply for estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.

  4. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis covering data obtained by computer simulation of neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing down and shielding problems have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of results was investigated. The objective was to optimize the sample size, in order to obtain reliable results, optimizing computation time. (author). 5 refs, 8 figs
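The sample-size effect studied above can be illustrated with a toy 1D shielding problem in which the analytic penetration probability is known, so the shrinking Monte Carlo error (proportional to 1/sqrt(n) for a binomial estimate) is directly visible. The slab thickness and mean free path are arbitrary illustrative values, not from the record.

```python
import math
import random

random.seed(42)

def penetration_fraction(n, d=3.0, mfp=1.0):
    """Fraction of n neutrons whose exponentially distributed free path
    exceeds the slab thickness d (mean free path mfp). The analytic
    answer for this toy absorber-only model is exp(-d / mfp)."""
    hits = sum(1 for _ in range(n) if random.expovariate(1.0 / mfp) > d)
    return hits / n

exact = math.exp(-3.0)
small = penetration_fraction(1_000)      # coarse estimate
large = penetration_fraction(100_000)    # ~10x smaller statistical error
print(exact, small, large)
```

Doubling the accuracy therefore costs four times the histories, which is the computation-time trade-off the abstract refers to when optimizing sample size.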

  5. First-order transitions and the multihistogram method

    International Nuclear Information System (INIS)

    Bhanot, G.; Lippert, T.; Schilling, K.; Ueberholz, P.

    1992-01-01

    We describe how the multihistogram method can be used to get reliable results from simulations in the critical region of first-order transitions even in the presence of severe hysteresis effects. (orig.)

  6. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM, where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. The Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach of obtaining software reliability value is proposed in this paper.
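A hedged sketch of the SRGM idea behind records 15 and 6: the Goel-Okumoto model is one common non-homogeneous Poisson process form (the records do not state which NHPP form they use), with mean value function m(t) = a(1 - exp(-b t)). The parameters below are illustrative, not fitted values; in the papers they would come from Bayesian inference on failure data.

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t)):
    expected cumulative number of failures observed by test time t.
    a = expected total defects, b = per-defect detection rate (assumed)."""
    return a * (1.0 - math.exp(-b * t))

a, b = 40.0, 0.05      # illustrative parameters, not fitted values
t = 30.0               # elapsed test time (e.g. weeks)

found = go_mean_failures(t, a, b)
remaining = a - found  # expected residual defects at delivery
print(round(found, 2), round(remaining, 2))
```

The expected residual-defect count a - m(t) is the quantity the nuclear-safety papers use to judge whether the software is reliable enough to deliver.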

  7. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    Full Text Available It is not known how accurate any of the numerous previous methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests that were loaded to failure. However, when applied to data from pile loading tests that were loaded without reaching failure, the methods that yielded lower values for the correction factor N are more recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to be used to estimate the Qult of a pile foundation based on soil data only.

  8. 76 FR 23801 - North American Electric Reliability Corporation; Order Approving Reliability Standard

    Science.gov (United States)

    2011-04-28

    ... have an operating plan and facilities for backup functionality to ensure Bulk-Power System reliability... entity's primary control center on the reliability of the Bulk-Power System. \\1\\ Mandatory Reliability... potential impact of a violation of the Requirement on the reliability of the Bulk-Power System. The...

  9. 18 CFR 39.6 - Conflict of a Reliability Standard with a Commission Order.

    Science.gov (United States)

    2010-04-01

    ... Reliability Standard with a Commission Order. 39.6 Section 39.6 Conservation of Power and Water Resources..., APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.6 Conflict of a Reliability Standard with... Transmission Organization determines that a Reliability Standard may conflict with a function, rule, order...

  10. An overview of reliability methods in mechanical and structural design

    Science.gov (United States)

    Wirsching, P. H.; Ortiz, K.; Lee, S. J.

    1987-01-01

    An evaluation is made of modern methods of fast probability integration and Monte Carlo treatment for the assessment of structural systems' and components' reliability. Fast probability integration methods are noted to be more efficient than Monte Carlo ones. This is judged to be an important consideration when several point probability estimates must be made in order to construct a distribution function. An example illustrating the relative efficiency of the various methods is included.

  11. Reliability and risk analysis methods research plan

    International Nuclear Information System (INIS)

    1984-10-01

    This document presents a plan for reliability and risk analysis methods research to be performed mainly by the Reactor Risk Branch (RRB), Division of Risk Analysis and Operations (DRAO), Office of Nuclear Regulatory Research. It includes those activities of other DRAO branches which are very closely related to those of the RRB. Related or interfacing programs of other divisions, offices and organizations are merely indicated. The primary use of this document is envisioned as an NRC working document, covering about a 3-year period, to foster better coordination in reliability and risk analysis methods development between the offices of Nuclear Regulatory Research and Nuclear Reactor Regulation. It will also serve as an information source for contractors and others to more clearly understand the objectives, needs, programmatic activities and interfaces together with the overall logical structure of the program

  12. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.


  13. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  14. Choosing a heuristic and root node for edge ordering in BDD-based network reliability analysis

    International Nuclear Information System (INIS)

    Mo, Yuchang; Xing, Liudong; Zhong, Farong; Pan, Zhusheng; Chen, Zhongyu

    2014-01-01

    In the Binary Decision Diagram (BDD)-based network reliability analysis, heuristics have been widely used to obtain a reasonably good ordering of edge variables. Orderings generated using different heuristics can lead to dramatically different sizes of BDDs, and thus dramatically different running times and memory usages for the analysis of the same network. Unfortunately, due to the nature of the ordering problem (i.e., being an NP-complete problem) no formal guidelines or rules are available for choosing a good heuristic or for choosing a high-performance root node to perform edge searching using a particular heuristic. In this work, we make novel contributions by proposing heuristic and root node selection methods based on the concept of boundary sets for the BDD-based network reliability analysis. Empirical studies show that the proposed selection methods can help to generate high-performance edge ordering for most of studied cases, enabling the efficient BDD-based reliability analysis of large-scale networks. The proposed methods are demonstrated on different types of networks, including square lattice networks, torus lattice networks and de Bruijn networks
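The boundary-set concept on which the proposed selection methods rest can be sketched briefly. In the illustration below (our own example graph and orderings, not taken from the paper), the boundary set after processing an edge is the set of vertices incident to both processed and unprocessed edges; orderings with a smaller maximum boundary set tend to yield smaller BDDs.

```python
# Boundary-set width of an edge ordering: after each edge is processed,
# the boundary set holds every vertex incident to both processed and
# not-yet-processed edges. A smaller maximum width generally means a
# smaller BDD in network reliability analysis.

def max_boundary(edges):
    seen = set()        # vertices already touched by processed edges
    remaining = {}      # vertex -> number of unprocessed incident edges
    for u, v in edges:
        remaining[u] = remaining.get(u, 0) + 1
        remaining[v] = remaining.get(v, 0) + 1
    width = 0
    for u, v in edges:
        for x in (u, v):
            remaining[x] -= 1
            seen.add(x)
        boundary = {x for x in seen if remaining[x] > 0}
        width = max(width, len(boundary))
    return width

# A 2x3 grid network: a breadth-first-style ordering vs. an interleaved one.
good = [(0, 1), (0, 3), (1, 2), (1, 4), (3, 4), (2, 5), (4, 5)]
bad  = [(0, 1), (4, 5), (0, 3), (2, 5), (1, 4), (1, 2), (3, 4)]
print(max_boundary(good), max_boundary(bad))
```

The same graph gives width 3 under the contiguous ordering and width 4 under the scattered one, which is the kind of difference the heuristic and root-node selection methods exploit.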

  15. The order progress diagram : A supportive tool for diagnosing delivery reliability performance in make-to-order companies

    NARCIS (Netherlands)

    Soepenberg, G.D.; Land, M.J.; Gaalman, G.J.C.

    This paper describes the development of a new tool for facilitating the diagnosis of logistic improvement opportunities in make-to-order (MTO) companies. Competitiveness of these companies increasingly imposes needs upon delivery reliability. In order to achieve high delivery reliability, both the

  16. Review of Quantitative Software Reliability Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chu, T.L.; Yue, M.; Martinez-Guridi, M.; Lehner, J.

    2010-09-17

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of

  17. Critical spare parts ordering decisions using conditional reliability and stochastic lead time

    International Nuclear Information System (INIS)

    Godoy, David R.; Pascual, Rodrigo; Knights, Peter

    2013-01-01

Asset-intensive companies face great pressure to reduce operation costs and increase utilization. This scenario often leads to over-stress on critical equipment and its associated spare parts, affecting availability, reliability, and system performance. As these resources have a considerable impact on financial and operational structures, there is demand for decision-making methods for the management of spare parts processes. We propose an ordering decision-aid technique which uses a measurement of spare performance based on the stress–strength interference theory, which we have called Condition-Based Service Level (CBSL). We focus on Condition Managed Critical Spares (CMS), namely, spares which are expensive, highly reliable, subject to long lead times, and not held in store. As a mitigation measure, CMS are under condition monitoring. The aim of the paper is to determine the time for ordering CMS or for continuing the operation. The paper presents a graphic technique which applies a decision rule based on both the condition-based reliability function and a stochastic/fixed lead time. For the stochastic lead time case, results show that the technique is effective in determining the time when the system operation is reliable and can withstand the lead time variability while satisfying a desired service level. Additionally, for the constant lead time case, the technique helps to define insurance spares. In conclusion, the presented ordering decision rule is useful to asset managers for enhancing the operational continuity affected by spare parts.
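A minimal sketch of the flavor of such an ordering rule (our simplification, not the paper's CBSL formulation): with a Weibull-distributed spare life, order at the latest age t at which the conditional reliability over the lead time still meets the target service level. All parameter values below are illustrative.

```python
import math

def cond_rel(t, lead, beta, eta):
    """Weibull conditional reliability of surviving the lead time from age t."""
    return math.exp((t / eta) ** beta - ((t + lead) / eta) ** beta)

def latest_order_time(lead, beta, eta, p, step=0.01):
    """Latest age at which ordering still meets service level p (grid search)."""
    t = 0.0
    while cond_rel(t + step, lead, beta, eta) >= p:
        t += step
    return t

# Illustrative: lead time 3 periods, Weibull shape 2, scale 20, target 90%.
t_order = latest_order_time(lead=3.0, beta=2.0, eta=20.0, p=0.90)
print(f"order no later than t = {t_order:.2f}")
```

Before t_order the component can still ride out the lead time at the desired service level; past it, continuing operation without a spare on order violates the target.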

  18. Reliability methods in nuclear power plant ageing management

    International Nuclear Information System (INIS)

    Simola, K.

    1999-01-01

The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant-specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  19. Reliability methods in nuclear power plant ageing management

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K. [VTT Automation, Espoo (Finland). Industrial Automation

    1999-07-01

The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant-specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  20. Human reliability analysis methods for probabilistic safety assessment

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-11-01

    Human reliability analysis (HRA) of a probabilistic safety assessment (PSA) includes identifying human actions from safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effect on safety and economy. Human reliability analysis is one of the areas of probabilistic safety assessment (PSA) that has direct applications outside the nuclear industry. The thesis focuses upon developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion about human actions in the nuclear power plant (NPP) environment. A condensed discussion about the results of the attached publications is then given, including new development in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)

  1. An Evaluation Method of Equipment Reliability Configuration Management

    Science.gov (United States)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. The evaluation method of equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is not only designed in, but also achieved through management. This paper evaluates reliability management capabilities using the reliability configuration capability maturity model (RCM-CMM) evaluation method.

  2. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation scheme to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  3. A REVIEW OF ORDER PICKING IMPROVEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Johan Oscar Ong

    2014-09-01

As a crucial and one of the most important parts of warehousing, order picking often raises discussion among warehousing professionals, resulting in various studies aiming to analyze how order picking activity can be improved from various perspectives. This paper reviews past research on order picking improvement and the various methods those studies analyzed or developed. This literature review is based on twenty research articles on order picking improvement viewed from four different perspectives: automation (specifically, stock-to-picker systems), storage assignment policy, order batching, and order picking sequencing. By reviewing these studies, we try to identify the most prevalent approaches to order picking improvement. Keywords: warehousing; stock-to-picker; storage assignment; order batching; order picking sequencing; improvement

  4. Parametric Methods for Order Tracking Analysis

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm

    2017-01-01

    Order tracking analysis is often used to find the critical speeds at which structural resonances are excited by a rotating machine. Typically, order tracking analysis is performed via non-parametric methods. In this report, however, we demonstrate some of the advantages of using a parametric method...

  5. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine

    DEFF Research Database (Denmark)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon

    2013-01-01

    as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine...

  6. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability theory, is easy to understand, but mathematical difficulties remain in the calculation of multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original function. According to the derivative relationships between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
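The direct integration this paper starts from can be sketched numerically without the neural-network surrogate. The example below uses our own performance function and a plain midpoint quadrature (not the paper's method): the joint normal density is integrated over the failure region g(x1, x2) < 0.

```python
import math

# Illustrative performance function (not from the paper): failure when x1 + x2 > 18.
def g(x1, x2):
    return 18.0 - x1 - x2

def phi(x, mu, sd):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Midpoint quadrature of the joint density over the failure region,
# truncated to mu +/- 4 sd in each variable; x1, x2 ~ N(6, 2), independent.
mu, sd, half, n = 6.0, 2.0, 8.0, 500
h = 2.0 * half / n
xs = [mu - half + (i + 0.5) * h for i in range(n)]
w = [phi(x, mu, sd) * h for x in xs]        # one-dimensional quadrature weights
pf = sum(w[i] * w[j] for i in range(n) for j in range(n) if g(xs[i], xs[j]) < 0.0)
print(f"Pf ~= {pf:.5f}")
```

Here the analytical value is 1 - Phi(6/sqrt(8)), about 0.0169, and the grid already uses 250,000 density evaluations for just two variables; the curse of dimensionality in this integral is exactly what motivates surrogates such as the paper's dual network.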

  7. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals. It is shown that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals.
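A much-simplified sketch of the extrapolation idea (ours, not the scheme derived in the paper): estimate relaxed versions of the tail probability by Monte Carlo where failures are still frequent, fit a tail model, and extrapolate to the hard target.

```python
import math
import random

random.seed(7)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Monte Carlo estimates at relaxed thresholds, where failures are frequent.
thresholds = [1.0, 1.5, 2.0, 2.5]
p_hat = [sum(s > t for s in samples) / len(samples) for t in thresholds]

# For a Gaussian-like tail, log p is roughly linear in t^2: least-squares fit.
xs = [t * t for t in thresholds]
ys = [math.log(p) for p in p_hat]
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
intercept = ybar - slope * xbar

# Extrapolate to a threshold where direct MC would need vastly more samples.
target = 4.0
p_extrap = math.exp(intercept + slope * target * target)
print(f"extrapolated P(X > 4) ~= {p_extrap:.1e}")
```

With this crude two-parameter fit the extrapolation recovers the order of magnitude of the exact tail value 1 - Phi(4), about 3.2e-05, from failures observed only at the easy thresholds; the paper's asymptotic functional form is designed to make such extrapolations far more accurate.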

  9. A simple reliability block diagram method for safety integrity verification

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2007-01-01

IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. PFDavg must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to understand and use the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508-6 and Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41]. An interesting outcome can be realized from the comparison. Furthermore, although differences in the MDT of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With the detailed description given, the RBD method presented can be applied to quantitative SIL verification, showing a similarity to the method in IEC 61508-6.
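For orientation, here is a deliberately simplified PFDavg sketch for voted groups of independent channels, with no common cause failures or diagnostic coverage, so far simpler than the IEC 61508-6 formulas this paper reworks; the failure rate, proof-test interval and repair time are illustrative.

```python
from math import comb

# Simplified model: channel average probability of failure on demand
# q = lambda_DU * (T1/2 + MTTR), and a KooN voted group fails when
# fewer than K of its N independent channels are available.

def channel_pfd(lambda_du, proof_interval, mttr):
    return lambda_du * (proof_interval / 2.0 + mttr)

def group_pfd(q, k, n):
    # Probability that more than n - k channels are simultaneously down.
    return sum(comb(n, m) * q ** m * (1 - q) ** (n - m)
               for m in range(n - k + 1, n + 1))

# Illustrative values: 5e-6 dangerous undetected failures per hour,
# annual proof test (8760 h), 8 h mean time to restoration.
q = channel_pfd(lambda_du=5e-6, proof_interval=8760.0, mttr=8.0)
for k, n in [(1, 1), (1, 2), (2, 3)]:
    print(f"{k}oo{n}: PFDavg ~= {group_pfd(q, k, n):.2e}")
```

Even this toy model reproduces the qualitative ranking seen in the standard's tables: 1oo2 beats 2oo3, which beats a single channel; the full IEC 61508-6 treatment adds common cause (beta factor) and diagnostic terms on top.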

  10. Fuzzy stochastic generalized reliability studies on embankment systems based on first-order approximation theorem

    Directory of Open Access Journals (Sweden)

    Wang Yajun

    2008-12-01

In order to address the complex uncertainties caused by interfacing between the fuzziness and randomness of the safety problem for embankment engineering projects, and to evaluate the safety of embankment engineering projects more scientifically and reasonably, this study presents the fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations in order to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions with half depressed gamma distribution, half depressed normal distribution, and half depressed echelon distribution. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan in the Yangtze River in terms of numerical analysis for the probability integration of reliability on the random field affected by three fuzzy factors. The result shows that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a referential method for solving complex multi-uncertainty problems in engineering safety analysis.
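The "half depressed" membership functions mentioned above can be sketched as descending branches attached to a full-membership plateau; the shapes below are our own guesses at the intended forms, with illustrative thresholds and parameters.

```python
import math

def half_normal(x, a, k):
    """Full membership up to threshold a, then a normal-shaped descent."""
    return 1.0 if x <= a else math.exp(-k * (x - a) ** 2)

def half_echelon(x, a, b):
    """Full membership up to a, linear descent reaching zero at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

# Illustrative: membership degrees around a safety threshold at 1.0.
for x in (0.8, 1.0, 1.2, 1.5):
    print(x, round(half_normal(x, 1.0, 4.0), 3), round(half_echelon(x, 1.0, 1.5), 3))
```

In a fuzzy-stochastic analysis such a membership degree weights the random safety performance function, so failure is no longer a crisp event at the threshold but fades in over the descending branch.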

  11. Reliability of non-destructive testing methods

    International Nuclear Information System (INIS)

    Broekhoven, M.J.G.

    1988-01-01

This contribution regards the results of an evaluation of the reliability of radiography (X-rays and gamma-rays), manual-, and mechanized/automated ultrasonic examination by generally accepted codes/rules, with respect to detection, characterization and sizing/localization of defects. The evaluation is based on the results of examinations, by a number of teams, of 30 test plates, 30 and 50 mm thickness, containing V, U, X and K-shaped welds, each containing several types of imperfections (211 in total) typical for steel arc fusion welding, such as porosity, inclusions, lack of fusion or penetration and cracks. Besides, some results are presented obtained from research on advanced UT-techniques, viz. the time-of-flight-diffraction and flaw-tip deflection technique. (author)

  12. Reliability of non-destructive testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Broekhoven, M J.G. [Ministry of Social Affairs, (Netherlands)

    1988-12-31

This contribution regards the results of an evaluation of the reliability of radiography (X-rays and gamma-rays), manual-, and mechanized/automated ultrasonic examination by generally accepted codes/rules, with respect to detection, characterization and sizing/localization of defects. The evaluation is based on the results of examinations, by a number of teams, of 30 test plates, 30 and 50 mm thickness, containing V, U, X and K-shaped welds, each containing several types of imperfections (211 in total) typical for steel arc fusion welding, such as porosity, inclusions, lack of fusion or penetration and cracks. Besides, some results are presented obtained from research on advanced UT-techniques, viz. the time-of-flight-diffraction and flaw-tip deflection technique. (author). 4 refs.

  13. Method for assessing reliability of a network considering probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, M.

    2005-01-01

A method for assessment of the reliability of a network is developed, which uses the features of fault tree analysis. The method is developed in such a way that enlarging the network under consideration does not require a significant increase of the model. The method is applied to small example networks consisting of a small number of nodes and a small number of connections. The results give the network reliability. They identify equipment which is to be carefully maintained so that the network reliability is not reduced, and equipment which is a candidate for redundancy, as this would improve network reliability significantly. (author)
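For a small network, the quantity such a fault-tree-based model approximates, two-terminal reliability, can be computed exactly by enumerating component states. The bridge network and availability values below are our own illustration, not from the paper.

```python
from itertools import product

# Bridge network: source 0, target 3; edge -> (endpoints, availability).
edges = {
    "a": ((0, 1), 0.95), "b": ((0, 2), 0.95),
    "c": ((1, 2), 0.99), "d": ((1, 3), 0.95), "e": ((2, 3), 0.95),
}

def connected(up, source=0, target=3):
    """Is target reachable from source using only the edges in `up`?"""
    reach, frontier = {source}, [source]
    while frontier:
        u = frontier.pop()
        for (x, y), _ in (edges[e] for e in up):
            for a, b in ((x, y), (y, x)):
                if a == u and b not in reach:
                    reach.add(b)
                    frontier.append(b)
    return target in reach

# Sum the probability of every working state (exact, 2^edges terms).
names = list(edges)
rel = 0.0
for states in product([True, False], repeat=len(names)):
    p = 1.0
    for e, s in zip(names, states):
        p *= edges[e][1] if s else 1.0 - edges[e][1]
    if connected([e for e, s in zip(names, states) if s]):
        rel += p
print(f"two-terminal reliability = {rel:.6f}")
```

The 2^n enumeration is exact but explodes quickly, which is why fault-tree or BDD-style models that grow slowly with network size, as the paper aims for, are needed for realistic networks.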

  14. Processing the Order of Symbolic Numbers: A Reliable and Unique Predictor of Arithmetic Fluency

    Directory of Open Access Journals (Sweden)

    Stephan E. Vogel

    2017-12-01

A small but growing body of evidence suggests a link between individual differences in processing the order of numerical symbols (e.g., deciding whether a set of digits is arranged in ascending/descending order or not) and arithmetic achievement. However, the reliability of behavioral correlates measuring symbolic and non-symbolic numerical order processing and their relationship to arithmetic abilities remain poorly understood. The present study aims to fill this knowledge gap by examining the behavioral correlates of numerical and non-numerical order processing and their unique associations with arithmetic fluency at two different time points within the same sample of individuals. Thirty-two right-handed adults performed three order judgment tasks consisting of symbolic numbers (i.e., digits), non-symbolic numbers (i.e., dots), and letters of the alphabet. Specifically, participants had to judge as accurately and as quickly as possible whether stimuli were ordered correctly (in ascending/descending order, e.g., 2-3-4; ●●●●-●●●-●●; B-C-D) or not (e.g., 4-5-3; ●●●●-●●●●●-●●●; D-E-C). Results of this study demonstrate that numerical order judgments are reliable measurements (i.e., high test-retest reliability), and that the observed relationship between symbolic number processing and arithmetic fluency accounts for a unique and reliable portion of variance over and above the non-symbolic number and letter conditions. The differential association of symbolic and non-symbolic numbers with arithmetic supports the view that processing the order of symbolic and non-symbolic numbers engages different cognitive mechanisms, and that the ability to process ordinal relationships of symbolic numbers is a reliable and unique predictor of arithmetic fluency.
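The test-retest reliability reported for the order judgment tasks is, at its core, a Pearson correlation between scores at the two measurement points. A minimal sketch with made-up reaction-time data:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean reaction times (ms) of 8 participants, measured twice.
session1 = [612, 540, 701, 655, 580, 630, 720, 595]
session2 = [598, 552, 688, 660, 571, 641, 709, 602]
r = pearson(session1, session2)
print(f"test-retest r = {r:.3f}")
```

High values of r mean that participants keep their relative standing across sessions, which is what licenses using the task score as a stable individual-difference predictor of arithmetic fluency.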

  15. Mathematical Methods in Survival Analysis, Reliability and Quality of Life

    CERN Document Server

    Huber, Catherine; Mesbah, Mounir

    2008-01-01

    Reliability and survival analysis are important applications of stochastic mathematics (probability, statistics and stochastic processes) that are usually covered separately in spite of the similarity of the involved mathematical theory. This title aims to redress this situation: it includes 21 chapters divided into four parts: Survival analysis, Reliability, Quality of life, and Related topics. Many of these chapters were presented at the European Seminar on Mathematical Methods for Survival Analysis, Reliability and Quality of Life in 2006.

  16. Advances in methods and applications of reliability and safety analysis

    International Nuclear Information System (INIS)

    Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.

    1986-01-01

The know-how of VTT in reliability and safety design and analysis techniques has been established over several years of analyzing reliability in the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has later been applied and developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors. The technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety is continuing in a number of research and development projects.

  17. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  18. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  19. A method to assign failure rates for piping reliability assessments

    International Nuclear Information System (INIS)

    Gamble, R.M.; Tagart, S.W. Jr.

    1991-01-01

    This paper reports on a simplified method that has been developed to assign failure rates that can be used in reliability and risk studies of piping. The method can be applied on a line-by-line basis by identifying line and location specific attributes that can lead to piping unreliability from in-service degradation mechanisms and random events. A survey of service experience for nuclear piping reliability also was performed. The data from this survey provides a basis for identifying in-service failure attributes and assigning failure rates for risk and reliability studies

  20. Modifying nodal pricing method considering market participants optimality and reliability

    Directory of Open Access Journals (Sweden)

    A. R. Soofiabadi

    2015-06-01

    Full Text Available This paper develops a method for nodal pricing and a market clearing mechanism that considers the reliability of the system. The effects of component reliability on electricity price, market participants' profit, and system social welfare are considered. Reliability is taken into account both in evaluating market participants' optimality and in achieving fair pricing and a fair market clearing mechanism. To achieve fair pricing, the nodal price is obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, a comprehensive criterion is introduced for the optimality evaluation of market participants. The social welfare and efficiency of the system are increased under the proposed modified nodal pricing method.

  1. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated regarding their support for synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...

  2. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of a variety of uncertainty variables that coexist in the engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  3. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. It may stem from variations in the geometry of the part, from material properties, or from a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in a high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and to an increase in computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to the design of a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations
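    The decoupled cycle described above can be sketched on a toy problem. The limit state, distributions, and target reliability index below are invented for illustration and are not taken from the flanging study: each cycle performs one inverse-reliability (MPP) assessment that produces shifted values of the random variables, followed by one deterministic optimization against those shifted values.

```python
import math

# Toy SORA loop (illustrative, not the paper's flanging problem):
# minimize d subject to g = d*X1 - X2 >= 0 holding with reliability
# index beta_t, where X1 ~ N(1, 0.1) and X2 ~ N(5, 0.5).
# g is linear in the standard normals (u1, u2), so the inverse MPP on
# the beta_t-sphere has a closed form from the gradient (0.1*d, -0.5).

def sora(beta_t=3.0, cycles=20):
    d = 5.0  # initial design
    for _ in range(cycles):
        # reliability assessment: inverse MPP for the current design
        g1, g2 = 0.1 * d, -0.5
        norm = math.hypot(g1, g2)
        u1 = -beta_t * g1 / norm
        u2 = -beta_t * g2 / norm
        x1 = 1.0 + 0.1 * u1   # shifted (worst-case) realizations
        x2 = 5.0 + 0.5 * u2
        # deterministic optimization: minimize d s.t. d*x1 - x2 >= 0
        d = x2 / x1
    return d

print(round(sora(), 3))  # 7.773
```

    Because the limit state is linear in the standard normal variables, the converged design matches the exact FORM answer, which can be checked by solving the quadratic for the reliability index directly.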

  4. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  5. Level III Reliability methods feasible for complex structures

    NARCIS (Netherlands)

    Waarts, P.H.; Boer, A. de

    2001-01-01

    The paper describes a comparison between three types of reliability methods: the code-type level I method used by a designer, a full level I method, and a level III method. Two cases that are typical of civil engineering practice, a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet

  6. Developing a reliable signal wire attachment method for rail.

    Science.gov (United States)

    2014-11-01

    The goal of this project was to develop a better attachment method for rail signal wires to improve the reliability of signaling : systems. EWI conducted basic research into the failure mode of current attachment methods and developed and tested a ne...

  7. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity was seen recently in the research and development community, much of it directed towards the prediction of failure probabilities for single failure modes. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss) and structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
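    In the simplest setting, the failure probability discussed above is an integral over the joint distribution of load and resistance, and a brute-force Monte Carlo estimate makes the idea concrete. The normal stress-strength pair below is an assumed example, not a case from the paper:

```python
import random

# Monte Carlo estimate of a failure probability (assumed example):
# strength R ~ N(30, 3) versus load S ~ N(20, 2); the structure fails
# when the limit state g = R - S is negative.

def failure_probability(n=200_000, seed=1):
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(30, 3) - rng.gauss(20, 2) < 0
    )
    return failures / n

print(failure_probability())
```

    For these parameters g is N(10, sqrt(13)), so the estimate should land close to the exact value Phi(-10/sqrt(13)), roughly 2.8e-3; a safety-factor check (30/20 = 1.5) would say nothing about this number.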

  8. Verification of practicability of quantitative reliability evaluation method (De-BDA) in nuclear power plants

    International Nuclear Information System (INIS)

    Takahashi, Kinshiro; Yukimachi, Takeo.

    1988-01-01

    A variety of methods have been applied to the study of reliability analysis in which human factors are included, in order to enhance the safety and availability of nuclear power plants. De-BDA (Detailed Block Diagram Analysis) is one such method, developed with the objective of creating a more comprehensive and understandable tool for quantitative analysis of reliability associated with plant operations. The practicability of this method has been verified by applying it to reliability analysis of various phases of plant operation, as well as to evaluation of an enhanced man-machine interface in the central control room. (author)

  9. Reliably detectable flaw size for NDE methods that use calibration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh18232 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have artificial flaws of known size, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for the detection of real flaws. Real flaws, such as cracks and crack-like flaws, are what these NDE methods are expected to detect. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from the artificial flaws used in the calibration process to determine the reliably detectable flaw size.
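    In the MIL-HDBK-1823 signal-response framework, the correlation step the abstract ends with is a ln-ln regression of response versus flaw size, from which a90 (the flaw size detected with 90% probability) follows. The sketch below uses synthetic data; the sizes, noise level, and decision threshold are invented for illustration:

```python
import math
import random
import statistics

# "a-hat versus a" POD sketch: regress ln(response) on ln(flaw size);
# POD(a) = P(response exceeds the decision threshold). All numbers
# below (sizes, scatter, threshold) are invented.

def slope_intercept(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    return b1, my - b1 * mx

def pod_a90(sizes, responses, threshold):
    """Flaw size with 90% probability of detection."""
    x = [math.log(a) for a in sizes]
    y = [math.log(r) for r in responses]
    b1, b0 = slope_intercept(x, y)
    sigma = statistics.stdev(yi - (b0 + b1 * xi) for xi, yi in zip(x, y))
    z90 = 1.2815515655446004  # standard normal 90th percentile
    return math.exp((math.log(threshold) - b0 + z90 * sigma) / b1)

# synthetic calibration-style data: response ~ 2*a with lognormal scatter
rng = random.Random(0)
sizes = [0.5 + 0.1 * i for i in range(26)]
responses = [2.0 * a * math.exp(rng.gauss(0, 0.2)) for a in sizes]
print(pod_a90(sizes, responses, threshold=2.0))
```

    With the noise-free model response = 2a and sigma = 0.2, the population a90 would be exp(0.2 * 1.2816), about 1.29; the estimate from 26 noisy points scatters around that value.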

  10. HUMAN RELIABILITY ANALYSIS DENGAN PENDEKATAN COGNITIVE RELIABILITY AND ERROR ANALYSIS METHOD (CREAM

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Full Text Available Workplace accidents in the grinding and welding areas have ranked highest over the last five years at PT. X. These accidents are caused by human error, which occurs under the influence of the physical and non-physical working environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method for obtaining the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, whereas the extended method yields a CFP for each task. The results show that the factors influencing error in grinding and welding work are the adequacy of the organization, the adequacy of the man-machine interface (MMI) and operational support, the availability of procedures and plans, and the adequacy of training and experience. The cognitive aspect with the highest error value is planning (CFP 0.3) for grinding work and execution (CFP 0.18) for welding work. To reduce these cognitive error values in grinding and welding work, the recommendations are to provide regular training, more detailed work instructions, and familiarization with the tools. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error
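    Computationally, the CFP adjustment the abstract describes amounts to scaling a nominal cognitive failure probability by a multiplier for each assessed common performance condition (CPC). The nominal values and weights below are illustrative placeholders, not the study's figures:

```python
# CREAM-style CFP adjustment sketch: nominal failure probabilities per
# cognitive activity (values invented for illustration) scaled by CPC
# weighting factors; weights > 1 degrade performance, < 1 improve it.

NOMINAL_CFP = {"planning": 1.0e-2, "execution": 3.0e-3}

def adjusted_cfp(activity, cpc_weights):
    cfp = NOMINAL_CFP[activity]
    for w in cpc_weights:   # one multiplier per assessed CPC
        cfp *= w
    return min(cfp, 1.0)    # a probability cannot exceed 1

# e.g. adverse organization (w=2.0) and inadequate training (w=5.0):
print(adjusted_cfp("planning", [2.0, 5.0, 1.0]))  # 0.1
```

    The extended method repeats this per task and per cognitive activity, which is how the study arrives at separate CFP values for planning and execution.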

  11. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  12. Assessment of the reliability of ultrasonic inspection methods

    International Nuclear Information System (INIS)

    Haines, N.F.; Langston, D.B.; Green, A.J.; Wilson, R.

    1982-01-01

    The reliability of NDT techniques has remained an open question for many years. A reliable technique may be defined as one that, when rigorously applied by a number of inspection teams, consistently finds and then correctly sizes all defects of concern. In this paper we report an assessment of the reliability of defect detection by manual ultrasonic methods applied to the inspection of thick-section pressure vessel weldments. Initially we consider the available data relating to the inherent physical capability of ultrasonic techniques to detect cracks in weldments; then, independently, we assess the likely variability in team-to-team performance when several teams are asked to follow the same specified test procedure. The two aspects of 'capability' and 'variability' are brought together to provide quantitative estimates of the overall reliability of ultrasonic inspection of thick-section pressure vessel weldments based on currently existing data. The final section of the paper considers current research programmes on reliability and presents a view on how these will help to further improve NDT reliability. (author)

  13. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. At the 20th ESReDA seminar, a new nonparametric method was proposed, the major point of which is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limitations, so we suggest an advanced method that reflects censored information more efficiently. In many instances there is a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have maximum likelihood estimators that exist uniquely. The MLE of the new method is therefore derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL-estimator it does not
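    For reference, the PL (Kaplan-Meier) estimator that the proposed method modifies fits in a few lines; the redistribution of censored mass that distinguishes the new method is not reproduced here.

```python
# Product-limit (Kaplan-Meier) estimator for right-censored data.
# Each observation is (time, event): event=True is an observed failure,
# event=False a censored withdrawal that only shrinks the risk set.

def product_limit(observations):
    """Return [(t, R(t))] evaluated at each observed failure time."""
    # at tied times, process failures before censorings (the usual convention)
    ordered = sorted(observations, key=lambda o: (o[0], not o[1]))
    at_risk = len(ordered)
    surv = 1.0
    curve = []
    for t, event in ordered:
        if event:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# e.g. failures at t = 2, 5, 7; censorings at t = 3 and 8:
data = [(2, True), (3, False), (5, True), (7, True), (8, False)]
print(product_limit(data))
```

    Note how the censored observation at t = 3 carries no survival step of its own but still reduces the risk set for the later failures, which is exactly the censored-information bookkeeping the abstract is concerned with.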

  14. Selected Methods For Increases Reliability The Of Electronic Systems Security

    Directory of Open Access Journals (Sweden)

    Paś Jacek

    2015-11-01

    Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). In the descriptive sense, the reliability of the SSP is its capacity to perform its preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain, e.g. environmental, conditions, despite possible failures of a specific subset of elements of this system. A review of the available literature on ESS-SSP shows no studies on methods of increasing reliability (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion, systems). The approach is based on an analysis of the set of all paths that keep the SSP fit for the fire-event scenario mentioned (devices critical for security).

  15. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material properties variability and loading uncertainties to give the reliability estimate. The approach taken to predict reliability of design related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then statistically assess the contributions of the various contributing variables. The probability of the failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM will be used to illustrate this methodology
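    The probability that "certain extreme situations combine" has a well-known closed form in the simplest case, where both the insulation strength and the applied stress are modeled as normal variables. This is an assumption made here for illustration; the paper's statistical treatment of test data and loading uncertainties is more detailed:

```python
import math

# Stress-strength interference under a normality assumption:
# P(failure) = P(R - S < 0) = Phi(-(mu_R - mu_S) / sqrt(s_R^2 + s_S^2)).
# The kV numbers below are invented, not CDM design values.

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interference_pf(mu_r, s_r, mu_s, s_s):
    beta = (mu_r - mu_s) / math.hypot(s_r, s_s)  # reliability index
    return normal_cdf(-beta)

# e.g. dielectric strength 50 kV (sd 5) vs. peak coil stress 30 kV (sd 3):
print(interference_pf(50, 5, 30, 3))  # roughly 3e-4
```

    The interpretation matches the abstract: the answer counts how often a randomly low strength and a randomly high stress coincide.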

  16. A reliable method for the stability analysis of structures ...

    African Journals Online (AJOL)

    The detection of structural configurations with singular tangent stiffness matrix is essential because they can be unstable. The secondary paths, especially in unstable buckling, can play the most important role in the loss of stability and collapse of the structure. A new method for reliable detection and accurate computation of ...

  17. Methods to compute reliabilities for genomic predictions of feed intake

    Science.gov (United States)

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  18. Planning of operation & maintenance using risk and reliability based methods

    DEFF Research Database (Denmark)

    Florian, Mihai; Sørensen, John Dalsgaard

    2015-01-01

    Operation and maintenance (OM) of offshore wind turbines contributes with a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented...

  19. Assessment of reliability of Greulich and Pyle (gp) method for ...

    African Journals Online (AJOL)

    Background: Greulich and Pyle standards are the most widely used age estimation standards all over the world. The applicability of the Greulich and Pyle standards to populations which differ from their reference population is often questioned. This study aimed to assess the reliability of Greulich and Pyle (GP) method for ...

  20. Statistical Bayesian method for reliability evaluation based on ADT data

    Science.gov (United States)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, with the latter being the more popular. However, some limitations remain, such as an imprecise solution process and estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the conventional solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data-processing and parameter-inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimates. Third, lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
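    The Wiener-process route the abstract favors can be sketched with point estimates standing in for the paper's Bayesian updating (the fit-then-extrapolate structure is the same; the prior/posterior machinery and the acceleration model are omitted, and all numbers are illustrative):

```python
import math

# Wiener degradation sketch: X(t) = mu*t + sigma*B(t), so increments over
# equal intervals dt are i.i.d. N(mu*dt, sigma^2*dt). Reliability is the
# probability that the first passage of a failure threshold occurs after t
# (inverse Gaussian survival function).

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_wiener(increments, dt):
    """Point estimates of drift mu and diffusion sigma from increments."""
    n = len(increments)
    mu = sum(increments) / (n * dt)
    var = sum((x - mu * dt) ** 2 for x in increments) / n
    return mu, math.sqrt(var / dt)

def reliability(t, mu, sigma, threshold):
    """P(first passage of the threshold occurs after t)."""
    s = sigma * math.sqrt(t)
    tail = normal_cdf(-(threshold + mu * t) / s)
    correction = math.exp(2.0 * mu * threshold / sigma ** 2) * tail if tail else 0.0
    return normal_cdf((threshold - mu * t) / s) - correction

# invented lab increments per unit time, failure threshold 10:
mu, sigma = fit_wiener([1.5, 0.5, 1.0, 1.4, 0.6], dt=1.0)
print(reliability(5.0, mu, sigma, threshold=10.0))
```

    The guard on the correction term avoids evaluating the (potentially huge) exponential when the companion tail probability underflows to zero, which happens for strongly drifting, low-noise processes.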

  1. Survey of methods used to asses human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1988-01-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  2. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    Science.gov (United States)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to address various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces a smart approach by which decision-makers will be able to find the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Given this research focus, engineering reliability, a branch of statistics and probability, is applied to the field, and much effort has been made to use it in tunneling by investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating tunnel support performance is therefore the main idea used in this research. Decomposition approaches are used to produce the system block diagram and determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for the different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for the different structural subsystems and the results of numerical analyses are obtained in
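    The decomposition step, combining component reliabilities through a system block diagram, reduces to series and parallel rules. The layout and component reliabilities below are invented for illustration, not those of the case-study tunnels:

```python
# Reliability block diagram sketch: series blocks must all survive;
# a parallel (redundant) group survives if any branch does.

def series(*rs):
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# hypothetical lining: rock bolts and shotcrete act redundantly,
# in series with steel ribs and the final lining:
system = series(parallel(0.95, 0.90), series(0.98, 0.99))
print(round(system, 4))  # 0.9653
```

    Once each isolated component reliability has been mapped from its safety factor, one pass through the block diagram yields the overall failure probability of the support system.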

  3. Reliability and discriminatory power of methods for dental plaque quantification

    Directory of Open Access Journals (Sweden)

    Daniela Prócida Raggio

    2010-04-01

    Full Text Available OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and a fluorescence camera (FC) to detect plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with the FC and a digital camera in both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare the different conditions of the samples and to assess inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory power. CONCLUSION: The Turesky index and FC images with disclosing present good reliability and discriminatory power in quantifying dental plaque.
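    The inter-examiner reproducibility check in this study rests on Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal version, with illustrative plaque-index scores rather than study data:

```python
from collections import Counter

# Cohen's kappa for two raters over the same items:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (observed - expected) / (1.0 - expected)

# hypothetical Turesky-style scores from two examiners:
a = [0, 1, 2, 2, 3, 1, 0, 2]
b = [0, 1, 2, 3, 3, 1, 0, 1]
print(round(cohens_kappa(a, b), 3))  # 0.673
```

    Values near 1 indicate the kind of adequate reproducibility the abstract reports; values near 0 mean the raters agree no more often than chance.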

  4. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples

  5. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.
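    After the K2 stage has produced a network structure and conditional probabilities, the reliability estimate itself is plain BN inference. A tiny hand-built example (all probabilities invented; K2 learning is not reproduced) shows that step:

```python
# Two-component BN feeding a system node; P(system up) is obtained by
# summing P(A) * P(B) * P(system | A, B) over all component states.

P_A = {True: 0.95, False: 0.05}           # component A up / down
P_B = {True: 0.90, False: 0.10}
P_SYS = {                                  # P(system up | A, B)
    (True, True): 0.999,
    (True, False): 0.60,
    (False, True): 0.70,
    (False, False): 0.01,
}

def system_reliability():
    return sum(P_A[a] * P_B[b] * P_SYS[(a, b)]
               for a in (True, False) for b in (True, False))

print(round(system_reliability(), 4))  # 0.9427
```

    In the papers' setting, the dictionaries above would be filled automatically from historical system data rather than written by a domain expert, which is precisely the point of the K2-based construction.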

  6. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  7. Reliability analysis for thermal cutting method based non-explosive separation device

    International Nuclear Information System (INIS)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu

    2016-01-01

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device are performed considering failure of the power supply, burn and unwind failures of the Ni-Cr wire, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils

  8. Reliability analysis for thermal cutting method based non-explosive separation device

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu [Korea Aerospace University, Goyang (Korea, Republic of)

    2016-12-15

    In order to increase the reliability of separation devices for small satellites, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. The FTA results for the separation device reveal only ten single-point failure modes. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device with five Ni-Cr wire coils is calculated as 0.999989.
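
    The redundancy effect behind a figure like the one reported can be sketched as follows. This is a hedged illustration, not the paper's FTA model: the coil failure probability and the series failure-mode probabilities are invented for the example.

```python
# Sketch of parallel redundancy (k Ni-Cr coils) combined with series
# single-point failure modes, as a fault tree would express them.

def device_reliability(p_coil_fail, n_coils, p_series_modes):
    # Parallel redundancy: ignition fails only if every coil fails.
    reliability = 1.0 - p_coil_fail ** n_coils
    # Remaining single-point failure modes act in series.
    for p in p_series_modes:
        reliability *= 1.0 - p
    return reliability

# Illustrative numbers only, not the paper's data.
print(device_reliability(0.05, 5, [1e-6, 2e-6, 1e-6]))
```

The device reliability rises sharply with each added coil, which is the design rationale for replicating the wire.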

  9. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failure to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.

  10. 78 FR 70356 - Compliance With Order EA-13-109, Order Modifying Licenses With Regard to Reliable Hardened...

    Science.gov (United States)

    2013-11-25

    ... Licenses With Regard to Reliable Hardened Containment Vents Capable of Operation Under Severe Accident... Regard to Reliable Hardened Containment Vents Capable of Operation under Severe Accident Conditions... capable of operation under severe accident conditions. This ISG also endorses, with clarifications, the...

  11. An exact method for solving logical loops in reliability analysis

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi

    2009-01-01

    This paper presents an exact method for solving logical loops in reliability analysis. Systems that include logical loops are usually described by simultaneous Boolean equations. First, a basic rule for solving simultaneous Boolean equations is presented. Next, the analysis procedure is shown for a three-component system with external supports. Third, more detailed discussion is given of the establishment of logical loop relations. Finally, two typical structures which include more than one logical loop are taken up; their analysis results and corresponding GO-FLOW charts are given. The proposed analytical method is applicable to loop structures that can be described by simultaneous Boolean equations, and it is very useful in evaluating the reliability of complex engineering systems.
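
    The fixed-point idea behind solving such simultaneous Boolean equations can be sketched in a few lines. The two-component mutual-support layout below is an assumed example, not the paper's GO-FLOW structure: component A is available if external support S or component B is available, while B needs both A and support T.

```python
# Minimal sketch of resolving a logical loop: two mutually dependent
# Boolean equations, solved by iterating to the least fixed point
# (all signals start at False, so the loop cannot bootstrap itself).

def solve_loop(s, t):
    a = b = False
    while True:
        a_new = s or b      # A = S OR B
        b_new = a and t     # B = A AND T
        if (a_new, b_new) == (a, b):
            return a, b
        a, b = a_new, b_new

print(solve_loop(True, True))   # with support S, both resolve to True
print(solve_loop(False, True))  # without S the loop alone yields (False, False)
```

Starting from all-False gives the least fixed point, which is the conservative reading of a support loop: the loop is only credited when some external input energizes it.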

  12. COMPOSITE METHOD OF RELIABILITY RESEARCH FOR HIERARCHICAL MULTILAYER ROUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    R. B. Tregubov

    2016-09-01

    The paper deals with the idea of a research method for hierarchical multilayer routing systems. The method represents a composition of methods from graph theory, reliability theory, probability theory, and others. These methods are applied to the solution of different particular analysis and optimization tasks and are systemically connected and coordinated with each other through a uniform set-theoretic representation of the object of research. Hierarchical multilayer routing systems are considered as infrastructure facilities (gas and oil pipelines, automobile and railway networks, systems of power supply and communication) that distribute material resources, energy or information with the use of hierarchically nested routing functions. For illustration, the theoretical constructions are considered on the example of determining the probability of the up state of a specific infocommunication system. The author shows the possibility of a constructive combination of the graph representation of the structure of the object of research and a logical-probabilistic analysis of its reliability indices through a uniform set-theoretic representation of its elements and the processes proceeding in them.

  13. Test-retest reliability and task order effects of emotional cognitive tests in healthy subjects.

    Science.gov (United States)

    Adams, Thomas; Pounder, Zoe; Preston, Sally; Hanson, Andy; Gallagher, Peter; Harmer, Catherine J; McAllister-Williams, R Hamish

    2016-11-01

    Little is known of the retest reliability of emotional cognitive tasks or the impact of using different tasks employing similar emotional stimuli within a battery. We investigated this in healthy subjects. We found improved overall performance in an emotional attentional blink task (EABT) with repeat testing at one hour and one week compared to baseline, but the impact of an emotional stimulus on performance was unchanged. Similarly, performance on a facial expression recognition task (FERT) was better one week after a baseline test, though the relative effect of specific emotions was unaltered. There was no effect of repeat testing on an emotional word categorising, recall and recognition task. We found no difference in performance in the FERT and EABT irrespective of task order. We concluded that it is possible to use emotional cognitive tasks in longitudinal studies and combine tasks using emotional facial stimuli in a single battery.

  14. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    Science.gov (United States)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, Zhanqing

    2018-01-01

    Lightning protection of power systems focuses on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect on power grid reliability. As a result, the lightning protection design of general transmission lines is excessive while that of key lines is insufficient. In order to solve this problem, a method for analyzing lightning strikes on transmission lines with respect to power grid reliability is given. Full-wave process theory is used to analyze lightning back-flashover, and a leader propagation model is used to describe the shielding failure of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.

  15. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
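
    The difference between the two sampling schemes can be sketched quickly. The toy response below is an assumption, not a NESSUS model; it only shows how LHS stratifies each input dimension before the response density parameters (mean, standard deviation) are estimated.

```python
# Sketch: estimate mean and standard deviation of a toy response
# y = x1**2 + x2 over uniform(0, 1) inputs with plain Monte Carlo
# and with Latin hypercube sampling.
import random
import statistics

def latin_hypercube(n, dims, rng):
    # One stratified draw per interval [i/n, (i+1)/n), shuffled per dimension.
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def response(x1, x2):
    return x1 ** 2 + x2

rng = random.Random(0)
n = 1000
mc = [response(rng.random(), rng.random()) for _ in range(n)]
lhs = [response(x1, x2) for x1, x2 in latin_hypercube(n, 2, rng)]

# True mean is 1/3 + 1/2 = 0.8333...
print(statistics.mean(mc), statistics.stdev(mc))
print(statistics.mean(lhs), statistics.stdev(lhs))
```

Because every input stratum is hit exactly once, the LHS mean estimate typically has lower variance than plain MC at the same sample size, which is the motivation for adding the module.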

  16. Evaluating the reliability of multi-body mechanisms: A method considering the uncertainties of dynamic performance

    International Nuclear Information System (INIS)

    Wu, Jianing; Yan, Shaoze; Zuo, Ming J.

    2016-01-01

    Mechanism reliability is defined as the ability of a certain mechanism to maintain output accuracy under specified conditions. Mechanism reliability is generally assessed by the classical direct probability method (DPM) derived from the first order second moment (FOSM) method. The DPM relies strongly on the analytical form of the dynamic solution, so it is not applicable to multi-body mechanisms that have only numerical solutions. In this paper, an indirect probability model (IPM) is proposed for mechanism reliability evaluation of multi-body mechanisms. IPM combines the dynamic equation, degradation function and Kaplan–Meier estimator to evaluate mechanism reliability comprehensively. Furthermore, to reduce the amount of computation in practical applications, the IPM is simplified into the indirect probability step model (IPSM). A case study of a crank–slider mechanism with clearance is investigated. Results show that relative errors between the theoretical and experimental results of mechanism reliability are less than 5%, demonstrating the effectiveness of the proposed method.
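
    The Kaplan–Meier step of such a model can be illustrated in isolation. A minimal sketch with invented failure and censoring times, not the paper's crank–slider data:

```python
# Kaplan-Meier survival estimator: at each distinct failure time t,
# multiply the running survival by (1 - deaths / at_risk).

def kaplan_meier(times, observed):
    # times: event times; observed: True if failure, False if right-censored.
    data = sorted(zip(times, observed))
    at_risk = len(data)
    survival, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            survival.append((t, s))
        at_risk -= deaths + censored
    return survival

km = kaplan_meier([2, 3, 3, 5, 8], [True, True, False, True, False])
print(km)  # survival steps near (2, 0.8), (3, 0.6), (5, 0.3)
```

Censored observations leave the survival curve unchanged but shrink the at-risk set, which is exactly what lets the estimator use incomplete degradation runs.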

  17. Applicability of simplified human reliability analysis methods for severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2016-03-15

    Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)

  18. Application of system reliability analytical method, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    1999-01-01

    The Ship Research Institute has been carrying out a developmental study of the GO-FLOW method, a system reliability analysis method that occupies a central part of PSA (Probabilistic Safety Assessment), adding various advanced functionalities to it. The aims were to upgrade the functionality of the GO-FLOW method, to develop an analysis capability integrating dynamic behavior analysis, physical behavior and probabilistic phenomena, and to prepare a function for extracting dominant accident sequences. In the 1997 fiscal year, an analytical function was developed for the dynamic event-tree analysis system by adding dependency between headings. The simulation analysis function for accident sequences became able to cover completely the dominant accident sequences of MRX, an improved marine propulsion reactor. In addition, a function was prepared that allows an analysis operator to set up input data easily. (G.K.)

  19. 78 FR 57418 - Compliance With Order EA-13-109, Order Modifying Licenses With Regard to Reliable Hardened...

    Science.gov (United States)

    2013-09-18

    ... Licenses With Regard to Reliable Hardened Containment Vents Capable of Operation Under Severe Accident... Capable of Operation under Severe Accident Conditions.'' (ADAMS Accession No. ML13247A417) This draft JLD... Containment Vents Capable of Operation under Severe Accident Conditions'' (ADAMS Accession No. ML13130A067...

  20. An Investment Level Decision Method to Secure Long-term Reliability

    Science.gov (United States)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement causes aging and lower reliability of power facilities. The aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetime in the future. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. This paper also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can maintain reliability by lowering the percentage of replacements caused by fatal failures.
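
    The renewal-theory idea can be sketched with a small simulation. The Weibull lifetime parameters, fleet size, and horizon below are illustrative assumptions, not the paper's substation data; the point is the replacement wave that appears as units approach their characteristic life.

```python
# Sketch: simulate yearly replacement counts for a fleet of new units whose
# lifetimes are Weibull-distributed; replaced units age anew (renewal process).
import random

def replacements_per_year(n_units, horizon_years, shape, scale, rng):
    counts = [0] * horizon_years
    for _ in range(n_units):
        t = rng.weibullvariate(scale, shape)       # first failure time
        while t < horizon_years:
            counts[int(t)] += 1                     # replace in that year...
            t += rng.weibullvariate(scale, shape)   # ...and the new unit ages
    return counts

rng = random.Random(0)
counts = replacements_per_year(1000, 40, shape=3.0, scale=30.0, rng=rng)
print(counts[:10])    # few replacements in early years
print(counts[25:35])  # a wave near the 30-year characteristic life
```

A planner would level such a projected cash-out by pulling some high-priority replacements forward, which is the role of the replacement priority formula in the abstract.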

  1. Improvement of human reliability analysis method for PRA

    International Nuclear Information System (INIS)

    Tanji, Junichi; Fujimoto, Haruo

    2013-09-01

    It is required to refine human reliability analysis (HRA) methods by, for example, incorporating consideration of the cognitive process of the operator into the evaluation of diagnosis errors and decision-making errors, as part of the development and improvement of methods used in probabilistic risk assessments (PRAs). JNES has developed an HRA method based on ATHENA which is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator cognition process. This report summarizes outcomes obtained from the improvement of the HRA method, in which enhancements were made to evaluate how degraded plant conditions affect the operator cognitive process and to evaluate human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences to investigate the applicability of the HRA method developed. HEPs of the same accident sequences are also estimated using the THERP method, the most widely used HRA method, and comparisons of the results obtained using these two methods are made to depict the differences between the methods and the issues to be solved. Important conclusions obtained are as follows: (1) Improvement of the HRA method using an operator cognitive action model. Clarification of the factors to be considered in the evaluation of human errors, incorporation of degraded plant safety conditions into HRA, and investigation of HEPs affected by the contents of operator tasks were made to improve the HRA method, which can integrate an operator cognitive action model into the ATHENA method. In addition, the detailed procedures of the improved method were delineated in the form of a flowchart. (2) Case studies and comparison with the results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies. 
These cases were also evaluated using

  2. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Aiming to resolve the problems of the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, and a convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  3. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  4. The Matrix Element Method at Next-to-Leading Order

    OpenAIRE

    Campbell, John M.; Giele, Walter T.; Williams, Ciaran

    2012-01-01

    This paper presents an extension of the matrix element method to next-to-leading order in perturbation theory. To accomplish this we have developed a method to calculate next-to-leading order weights on an event-by-event basis. This allows for the definition of next-to-leading order likelihoods in exactly the same fashion as at leading order, thus extending the matrix element method to next-to-leading order. A welcome by-product of the method is the straightforward and efficient generation of...

  5. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in a structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is generally used because it has a very clear logic and is simple to program. However, the traditional response surface method fits a quadratic polynomial response surface, whose accuracy cannot be guaranteed because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed for situations where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Calculation examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface is fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
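
    For contrast with the response-surface approach, the first-order second-moment calculation mentioned above is simple enough to show directly. A sketch for a linear limit state g = R - S with independent normal resistance R and load S; the numbers are illustrative, not from the paper.

```python
# First-order second-moment (FOSM) reliability for g = R - S:
# beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2), Pf = Phi(-beta).
import math

def fosm_beta(mu_r, sigma_r, mu_s, sigma_s):
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = fosm_beta(mu_r=200.0, sigma_r=20.0, mu_s=120.0, sigma_s=15.0)
print(beta)               # reliability index, here 3.2
print(normal_cdf(-beta))  # probability of failure Pf = Phi(-beta)
```

When the performance function is nonlinear or implicit, this closed form is unavailable, which is precisely the gap the response-surface and neural-network approaches address.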

  6. Collection of methods for reliability and safety engineering

    International Nuclear Information System (INIS)

    Fussell, J.B.; Rasmuson, D.M.; Wilson, J.R.; Burdick, G.R.; Zipperer, J.C.

    1976-04-01

    The document presented contains five reports each describing a method of reliability and safety engineering. Report I provides a conceptual framework for the study of component malfunctions during system evaluations. Report II provides methods for locating groups of critical component failures such that all the component failures in a given group can be caused to occur by the occurrence of a single separate event. These groups of component failures are called common cause candidates. Report III provides a method for acquiring and storing system-independent component failure logic information. The information stored is influenced by the concepts presented in Report I and also includes information useful in locating common cause candidates. Report IV puts forth methods for analyzing situations that involve systems which change character in a predetermined time sequence. These phased missions techniques are applicable to the hypothetical ''accident chains'' frequently analyzed for nuclear power plants. Report V presents a unified approach to cause-consequence analysis, a method of analysis useful during risk assessments. This approach, as developed by the Danish Atomic Energy Commission, is modified to reflect the format and symbology conventionally used for other types of analysis of nuclear reactor systems

  7. Numerov iteration method for second order integral-differential equation

    International Nuclear Information System (INIS)

    Zeng Fanan; Zhang Jiaju; Zhao Xuan

    1987-01-01

    In this paper, a Numerov iterative method for second-order integral-differential equations and systems of such equations is constructed. Numerical examples show that this method is better than the direct method (Gauss elimination) in CPU time and memory requirements. Therefore, this method is an efficient method for solving integral-differential equations in nuclear physics.

  8. A Reliability-Oriented Design Method for Power Electronic Converters

    DEFF Research Database (Denmark)

    Wang, Huai; Zhou, Dao; Blaabjerg, Frede

    2013-01-01

    Reliability is a crucial performance indicator of power electronic systems in terms of availability, mission accomplishment and life cycle cost. A paradigm shift in the research on reliability of power electronics is going on from simple handbook based calculations (e.g. models in MIL-HDBK-217F h...... and reliability prediction models are provided. A case study on a 2.3 MW wind power converter is discussed with emphasis on the reliability critical component IGBT modules....

  9. Precision profiles and analytic reliability of radioimmunologic methods

    International Nuclear Information System (INIS)

    Yaneva, Z.; Popova, Yu.

    1991-01-01

    The aim of the present study is to investigate and compare some methods for the creation of 'precision profiles' (PP) and to clarify their possibilities for determining the analytical reliability of RIA. Only methods without complicated mathematical calculations have been used. The reproducibility in serums with a concentration of the determinable hormone over the whole range of the calibration curve has been studied. The radioimmunoassay has been performed with a TSH-RIA set (ex East Germany), and comparative evaluations - with commercial sets of HOECHST (Germany) and AMERSHAM (GB). Three methods for obtaining the relationship concentration (IU/l) - reproducibility (C.V., %) are used and a comparison is made of their corresponding profiles: preliminary rough profile, Rodbard-PP and Ekins-PP. It is concluded that the creation of a precision profile is obligatory and that the method of its construction does not influence the relationship's course. PP allows determination of the concentration range giving stable results, which improves the efficiency of the analytical work. 16 refs., 4 figs

  10. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
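
    The forward Monte Carlo idea for a Markov model can be sketched on the simplest case, a single repairable component alternating between up and down states. The rates are illustrative, and the analytic steady-state unavailability lam / (lam + mu) serves as the check.

```python
# Forward Monte Carlo for a two-state Markov (up/down) component:
# exponential time-to-failure (rate lam) and time-to-repair (rate mu),
# estimating mean unavailability over [0, t_end].
import random

def mc_unavailability(lam, mu, t_end, trials, rng):
    down_time = 0.0
    for _ in range(trials):
        t, up = 0.0, True
        while t < t_end:
            dt = rng.expovariate(lam if up else mu)
            if not up:
                down_time += min(dt, t_end - t)  # clip the last interval
            t += dt
            up = not up
    return down_time / (trials * t_end)

lam, mu = 0.01, 0.1  # illustrative failure and repair rates per hour
u = mc_unavailability(lam, mu, 5000.0, 200, random.Random(0))
print(u)  # close to the analytic lam / (lam + mu) = 0.0909...
```

The adjoint and weighted-forward schemes in the abstract accelerate exactly this kind of estimate by biasing the sampled histories and correcting with weights.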

  11. New Efficient Fourth Order Method for Solving Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Farooq Ahmad

    2013-12-01

    In a paper [Appl. Math. Comput., 188 (2) (2007) 1587--1591], the authors suggested and analyzed a method for solving nonlinear equations. In the present work, we modify this method by using a finite difference scheme, which has quintic convergence. We have compared this modified Halley method with some other iterative methods of fifth-order convergence, which shows that this new method, having convergence of fourth order, is efficient.
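
    For reference, the classic Halley iteration (cubic convergence) that such modified variants build on can be written in a few lines; the test function f(x) = x^2 - 2 and the tolerances are arbitrary illustrative choices, and this is not the paper's modified scheme.

```python
# Classic Halley iteration for f(x) = 0:
# x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''), cubically convergent.

def halley(f, df, d2f, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x
    return x

root = halley(lambda x: x * x - 2.0, lambda x: 2.0 * x, lambda x: 2.0, 1.0)
print(root)  # converges to sqrt(2)
```

Finite-difference modifications of the kind the abstract describes typically replace the second derivative f'' with a difference quotient, trading an analytic derivative for extra function evaluations.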

  12. A Generalized Runge-Kutta Method of order three

    DEFF Research Database (Denmark)

    Thomsen, Per Grove

    2002-01-01

    The report presents a numerical method for the solution of stiff systems of ODE's and index one DAE's. The type of method is a 4- stage Generalized Linear Method that is reformulated in a special Semi Implicit Runge Kutta Method of SDIRK type. Error estimation is by imbedding a method of order 4...... based on the same stages as the method and the coefficients are selected for ease of implementation. The method has 4 stages and the stage-order is 2. For purposes of generating dense output and for initializing the iteration in the internal stages a continuous extension is derived. The method is A......-stable and we present the region of absolute stability and the order star of the order 3 method that is used for computing the solution....

  13. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Directory of Open Access Journals (Sweden)

    Donald D. Anderson

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater (0.84–0.97) reliability. This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.

  14. Field Method for Integrating the First Order Differential Equation

    Institute of Scientific and Technical Information of China (English)

    JIA Li-qun; ZHENG Shi-wang; ZHANG Yao-yu

    2007-01-01

    An important modern method in analytical mechanics for finding integrals, called the field method, is used to study the solution of a differential equation of the first order. First, by introducing an intermediate variable, a more complicated differential equation of the first order can be expressed by two simple differential equations of the first order; then the field method in analytical mechanics is introduced to solve these two equations. The conclusion shows that the field method in analytical mechanics can be fully used to find the solutions of a differential equation of the first order, thus providing a new method for solving first-order differential equations.

  15. Development of reliability centered maintenance methods and tools

    International Nuclear Information System (INIS)

    Jacquot, J.P.; Dubreuil-Chambardel, A.; Lannoy, A.; Monnier, B.

    1992-12-01

    This paper recalls the development of the RCM (Reliability Centered Maintenance) approach in the nuclear industry and describes the trial study implemented by EDF in the context of the OMF (RCM) Project. The approach developed is currently being applied to about thirty systems (Industrial Project). In parallel, R and D efforts are being maintained to improve the selectivity of the analysis methods. These methods use Probabilistic Safety Study models, thereby guaranteeing better selectivity in the identification of safety critical elements and enhancing consistency between Maintenance and Safety studies. They also offer more detailed analysis of operation feedback, invoking, for example, Bayesian methods combining expert judgement and feedback data. Finally, they propose a functional and material representation of the plant. This dual representation describes both the functions assured by maintenance provisions and the material elements required for their implementation. In the final chapter, the targets of the future OMF workstation are summarized and the latter's insertion in the EDF information system is briefly described. (authors). 5 figs., 2 tabs., 7 refs

  16. AK-SYS: An adaptation of the AK-MCS method for system reliability

    International Nuclear Information System (INIS)

    Fauriat, W.; Gayton, N.

    2014-01-01

    A lot of research work has been proposed over the last two decades to evaluate the probability of failure of a structure involving a very time-consuming mechanical model. Surrogate model approaches based on Kriging, such as the Efficient Global Reliability Analysis (EGRA) or the Active learning and Kriging-based Monte-Carlo Simulation (AK-MCS) methods, are very efficient and each has advantages of its own. EGRA is well suited to evaluating small probabilities, as the surrogate can be used to classify any population. AK-MCS is built in relation to a given population and requires no optimization program for the active learning procedure to be performed. It is therefore easier to implement and more likely to spend computational effort on areas with a significant probability content. When assessing system reliability, analytical approaches and first-order approximation are widely used in the literature. However, in the present paper we rather focus on sampling techniques and, considering the recent adaptation of the EGRA method for systems, a strategy is presented to adapt the AK-MCS method for system reliability. The AK-SYS method, “Active learning and Kriging-based SYStem reliability method”, is presented. Its high efficiency and accuracy are illustrated via various examples
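    The enrichment loop of AK-MCS can be sketched compactly: fit a Kriging (Gaussian process) surrogate to a small design of experiments, evaluate the learning function U(x) = |mu(x)|/sigma(x) over a fixed Monte Carlo population, add the population sample with the smallest U to the design, and stop once min U >= 2. The limit state g(x) = 4 - x^2, the population size, and all Gaussian-process settings below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit state; failure when g(x) <= 0."""
    return 4.0 - x**2

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel with unit variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(X_tr, y_tr, X_new, noise=1e-6):
    """Posterior mean and standard deviation of a simple Kriging model."""
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    Ks = rbf(X_tr, X_new)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # k(x, x) = 1
    return mu, np.sqrt(var)

X_mc = rng.standard_normal(5000)              # fixed Monte Carlo population
X_tr = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])  # initial design of experiments
y_tr = g(X_tr)

for _ in range(20):                    # active-learning enrichment loop
    mu, sd = gp_predict(X_tr, y_tr, X_mc)
    U = np.abs(mu) / sd                # AK-MCS learning function
    if U.min() >= 2.0:                 # classification deemed reliable
        break
    x_new = X_mc[np.argmin(U)]         # most ambiguous population point
    X_tr = np.append(X_tr, x_new)
    y_tr = np.append(y_tr, g(x_new))

mu, _ = gp_predict(X_tr, y_tr, X_mc)
pf_hat = np.mean(mu <= 0.0)            # failure probability via surrogate
pf_ref = np.mean(g(X_mc) <= 0.0)       # reference: direct evaluation
```

    Because the surrogate classifies the whole population, only a few dozen true limit-state evaluations are spent instead of 5000, which is the point of the active-learning strategy.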

  17. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
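    The load-capacity core of such a margin assessment can be illustrated with a few lines of Monte Carlo sampling. The quantities below, a peak-temperature "load" and a failure-limit "capacity", both normal, are invented for illustration and are not RELAP5-3D results.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100_000
# Hypothetical uncertain quantities (illustrative values, deg C):
load = rng.normal(850.0, 40.0, N)        # simulated peak temperature demand
capacity = rng.normal(1000.0, 25.0, N)   # temperature limit with uncertainty

margin = capacity - load                 # safety margin, sample by sample
pf = np.mean(margin <= 0.0)              # probability of functional failure
```

    The full margin distribution, not just its mean, is what a risk-informed characterization retains; the failure probability is simply the mass of that distribution at or below zero.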

  18. The psychophysiological assessment method for pilot's professional reliability.

    Science.gov (United States)

    Zhang, L M; Yu, L S; Wang, K N; Jing, B S; Fang, C

    1997-05-01

    Previous research has shown that a pilot's professional reliability depends on two relative factors: the pilot's functional state and the demands of task workload. The Psychophysiological Reserve Capacity (PRC) is defined as a pilot's ability to accomplish additive tasks without reducing the performance of the primary task (flight task). We hypothesized that the PRC was a mirror of the pilot's functional state. The purpose of this study was to probe the psychophysiological method for evaluating a pilot's professional reliability on a simulator. The PRC Comprehensive Evaluating System (PRCCES) which was used in the experiment included four subsystems: a) quantitative evaluation system for pilot's performance on simulator; b) secondary task display and quantitative estimating system; c) multiphysiological data monitoring and statistical system; and d) comprehensive evaluation system for pilot PRC. Two studies were performed. In study one, 63 healthy and 13 hospitalized pilots participated. Each pilot performed a double 180 degrees circuit flight program with and without secondary task (three-digit operation). The operator performance, score of secondary task and cost of physiological effort were measured and compared by PRCCES in the two conditions. Then, each pilot's flight skill in training was subjectively scored by instructor pilot ratings. In study two, 7 healthy pilots volunteered to take part in the experiment on the effects of sleep deprivation on pilot's PRC. Each participant had PRC tested pre- and post-8 h sleep deprivation. The results show that the PRC values of healthy pilots were positively correlated with abilities of flexibility, operating and correcting deviation, attention distribution, and accuracy of instrument flight in the air (r = 0.27-0.40, p < 0.05), and negatively correlated with emotional anxiety in flight (r = -0.40, p < 0.05). The values of PRC in healthy pilots (0.61 +/- 0.17) were significantly higher than those of hospitalized pilots.

  19. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes

    DEFF Research Database (Denmark)

    Bohlin, J; Skjerve, E; Ussery, David

    2008-01-01

    with here are mainly used to examine similarities between archaeal and bacterial DNA from different genomes. These methods compare observed genomic frequencies of fixed-size oligonucleotides with expected values, which can be determined by genomic nucleotide content, smaller oligonucleotide frequencies......, or be based on specific statistical distributions. Advantages of these statistical methods include measurement of phylogenetic relationships with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore...... the reliability and best-suited applications of some popular methods, which include relative oligonucleotide frequencies (ROF), di- to hexanucleotide zeroth-order Markov methods (ZOM) and second-order Markov chain methods (MCM). Tests were performed on distant homology searches with large DNA sequences, detection...
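    As a concrete illustration of the ROF/ZOM idea, the sketch below counts dinucleotides in a DNA string and divides each observed frequency by the expectation from mononucleotide content (a zeroth-order model). The test sequence is made up; ratios far from 1 flag over- or under-represented oligonucleotides.

```python
from collections import Counter
from itertools import product

def rof(seq, k=2):
    """Relative oligonucleotide frequencies: observed k-mer frequency divided
    by the expectation from mononucleotide composition (zeroth-order model)."""
    n = len(seq)
    mono_f = {b: c / n for b, c in Counter(seq).items()}
    kmers = Counter(seq[i:i + k] for i in range(n - k + 1))
    total = n - k + 1
    out = {}
    for mer in map(''.join, product('ACGT', repeat=k)):
        obs = kmers.get(mer, 0) / total
        exp = 1.0
        for b in mer:                     # zeroth-order expectation
            exp *= mono_f.get(b, 0.0)
        out[mer] = obs / exp if exp > 0 else 0.0
    return out

profile = rof("ATGCGCGCATATATGCGC" * 10)
```

    Comparing such profiles between two sequences (e.g. by correlation) gives a simple alignment-free similarity measure of the kind the review evaluates.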

  20. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    Directory of Open Access Journals (Sweden)

    Xiao-ping Bai

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses the quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.

  1. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    Science.gov (United States)

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses the quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.
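    The information-entropy weighting step of such a method can be sketched in a few lines: normalize each index column of the decision matrix, compute its entropy, and weight indexes by how much they discriminate between schemes. The decision-matrix values for the four first-order indexes below are invented for illustration.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate construction schemes,
# columns = first-order indexes (cost, progress, quality, safety),
# already scaled so that larger is better.
X = np.array([
    [0.70, 0.80, 0.90, 0.60],
    [0.85, 0.60, 0.75, 0.80],
    [0.60, 0.90, 0.70, 0.85],
])

P = X / X.sum(axis=0)                            # column-wise normalization
m = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(m)     # entropy of each index
w = (1.0 - E) / (1.0 - E).sum()                  # entropy weights, sum to 1
scores = X @ w                                   # synthesis score per scheme
best = int(np.argmax(scores))                    # index of the chosen scheme
```

    Indexes whose values barely differ between schemes have entropy close to 1 and therefore receive little weight; highly discriminating indexes dominate the synthesis score.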

  2. Cheap arbitrary high order methods for single integrand SDEs

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Kværnø, Anne

    2017-01-01

    For a particular class of Stratonovich SDE problems, here denoted as single integrand SDEs, we prove that by applying a deterministic Runge-Kutta method of order $p_d$ we obtain methods converging in the mean-square and weak sense with order $\\lfloor p_d/2\\rfloor$. The reason is that the B-series...
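    The construction can be demonstrated numerically: for the single integrand SDE dX = aX ∘ (dt + dW) (Stratonovich), applying the deterministic Heun method (order p_d = 2) with the scalar step replaced by the increment dt + dW reproduces the exact solution x0·exp(a(t + W)) to mean-square order ⌊p_d/2⌋ = 1. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

a, x0, T = 0.5, 1.0, 1.0
n_steps, n_paths = 200, 500
h = T / n_steps

def f(x):
    return a * x

x = np.full(n_paths, x0)
W = np.zeros(n_paths)
for _ in range(n_steps):
    dW = np.sqrt(h) * rng.standard_normal(n_paths)
    dG = h + dW                    # single-integrand increment dt + dW
    k1 = f(x)                      # deterministic Heun (order 2) step,
    k2 = f(x + dG * k1)            # with the step size replaced by dG
    x = x + 0.5 * dG * (k1 + k2)
    W += dW

exact = x0 * np.exp(a * (T + W))   # Stratonovich solution x0*exp(a(t+W))
err = np.mean(np.abs(x - exact))   # pathwise (strong) error at t = T
```

    The appeal of the result is exactly this cheapness: an off-the-shelf deterministic Runge-Kutta step is reused unchanged, with only the increment redefined.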

  3. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

  4. Risk-based methods for reliability investments in electric power distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Alvehag, Karin

    2011-07-01

    Society relies more and more on a continuous supply of electricity. However, while underinvestment in reliability leads to an unacceptable number of power interruptions, overinvestment results in too high costs for society. To give incentives for a socioeconomically optimal level of reliability, quality regulations have been adopted in many European countries. These quality regulations imply new financial risks for the distribution system operator (DSO), since poor reliability can reduce the allowed revenue for the DSO and compensation may have to be paid to affected customers. This thesis develops a method for evaluating the incentives for reliability investments implied by different quality regulation designs. The method can be used to investigate whether socioeconomically beneficial projects are also beneficial for a profit-maximizing DSO subject to a particular quality regulation design. To investigate which reinvestment projects are preferable for society and a DSO, risk-based methods are developed. With these methods, the probability of power interruptions and the consequences of these can be simulated. The consequences of interruptions for the DSO will to a large extent depend on the quality regulation. The consequences for the customers, and hence also society, will depend on factors such as the interruption duration and time of occurrence. The proposed risk-based methods consider extreme outage events in the risk assessments by incorporating the impact of severe weather, estimating the full probability distribution of the total reliability cost, and formulating a risk-averse strategy. Results from case studies performed show that quality regulation design has a significant impact on reinvestment project profitability for a DSO. In order to adequately capture the financial risk that the DSO is exposed to, detailed risk-based methods, such as the ones developed in this thesis, are needed. Furthermore, when making investment decisions, a risk
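    The core of such a risk-based assessment can be sketched as a compound-Poisson simulation of yearly interruption costs under a capped penalty regime. Every number below (outage rate, restoration time, cost rates, penalty cap) is an invented illustration, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

years = 10_000       # simulated regulatory years
lam = 2.0            # expected outages per year on the feeder
mttr = 1.5           # mean restoration time, hours
cic = 4000.0         # customer interruption cost, EUR per hour
pen_rate = 2500.0    # regulatory penalty, EUR per interruption hour
pen_cap = 10_000.0   # cap on the yearly penalty

n_out = rng.poisson(lam, years)                         # outages per year
hours = np.array([rng.exponential(mttr, k).sum() for k in n_out])

society_cost = cic * hours                              # customers' cost
dso_penalty = np.minimum(pen_rate * hours, pen_cap)     # DSO's regulated risk

mean_cost = society_cost.mean()
tail_cost = np.quantile(society_cost, 0.95)             # tail (extreme-event) risk
```

    Keeping the full yearly distribution, rather than only the expectation, is what allows a risk-averse comparison of reinvestment projects: two projects with equal mean cost can differ sharply in the 95th percentile.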

  5. Development on methods for evaluating structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Peschke, J.; Sievers, J.

    2003-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour, GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The development is based on the experience achieved with applications of the publicly available US code PRAISE 3.10 (Piping Reliability Analysis Including Seismic Events), which was supplemented by additional features regarding the statistical evaluation and the crack orientation. PROST is designed to be more flexible with respect to changes and extensions. Up to now it can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents a parametric study on the influence of changing the method of stress intensity factor and limit load calculation and the statistical evaluation options on the leak probability of an exemplary pipe with postulated axial crack distribution. Furthermore the resulting leak probability of an exemplary pipe with postulated circumferential crack distribution is compared with the results of the modified PRAISE computer program. The intention of this investigation is to show trends. Therefore the resulting absolute values for probabilities should not be considered as realistic evaluations. (author)
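    The structure of such a leak-probability estimate can be sketched with a strongly simplified Paris-law Monte Carlo model: sample an initial crack depth and growth-law coefficient, propagate the crack over the load history, and count paths that grow through the wall. Every distribution and constant below is invented for illustration and has no relation to PROST or PRAISE input data.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 50_000
t_wall = 10.0                                            # wall thickness, mm
a0 = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)  # initial crack depth, mm
C = rng.lognormal(mean=np.log(6e-9), sigma=0.3, size=n)  # Paris-law coefficient
m, dK, cycles = 3.0, 10.0, 1.0e6                         # exponent, MPa*sqrt(m), cycles

growth = C * dK**m * cycles          # simplified Paris law da/dN = C*(dK)^m,
a_end = a0 + growth                  # here with a constant dK for brevity
p_leak = np.mean(a_end >= t_wall)    # crack penetrates the wall -> leak
```

    A production code additionally lets dK depend on the current crack size and geometry and treats the statistical evaluation options mentioned above; the sketch only shows why the output is a probability rather than a single margin.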

  6. Methods for qualification of highly reliable software - international procedure

    International Nuclear Information System (INIS)

    Kersken, M.

    1997-01-01

    Despite the advantages of computer-assisted safety technology, there still is some uneasiness to be observed with respect to the novel processes, resulting from the absence of a body of generally accepted and uncontentious qualification guides (regulatory provisions, standards) for safety evaluation of the computer codes applied. Warranty of adequate protection of the population, operators or plant components is an essential aspect in this context, too - as it is in general with reliability and risk assessment of novel technology - so that, due to appropriate legislation still missing, there currently is a licensing risk involved in the introduction of digital safety systems. Nevertheless, there is some extent of agreement within the international community and utility operators about what standards and measures should be applied for qualification of software of relevance to plant safety. The standard IEC 880 /IEC 86/ in particular, in its original version, or national documents based on this standard, are applied in all countries using or planning to install those systems. A novel supplement to this standard, document /IEC 96/, is in the process of finalization and defines the requirements to be met by modern methods of software engineering. (orig./DG) [de

  7. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    Energy Technology Data Exchange (ETDEWEB)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz [Air Force Institute of Technology ul. Księcia Bolesława 6 01-494 Warsaw (Poland)

    2016-06-08

    The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment’s reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  8. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    International Nuclear Information System (INIS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-01-01

    The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment’s reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  9. Can the second order multireference perturbation theory be considered a reliable tool to study mixed-valence compounds?

    Science.gov (United States)

    Pastore, Mariachiara; Helal, Wissam; Evangelisti, Stefano; Leininger, Thierry; Malrieu, Jean-Paul; Maynau, Daniel; Angeli, Celestino; Cimiraglia, Renzo

    2008-05-07

    In this paper, the problem of the calculation of the electronic structure of mixed-valence compounds is addressed in the frame of multireference perturbation theory (MRPT). Using a simple mixed-valence compound (the 5,5′(4H,4H′)-spirobi[cyclopenta[c]pyrrole] 2,2′,6,6′-tetrahydro cation), and the n-electron valence state perturbation theory (NEVPT2) and CASPT2 approaches, it is shown that the ground state (GS) energy curve presents an unphysical "well" for nuclear coordinates close to the symmetric case, where a maximum is expected. For NEVPT, the correct shape of the energy curve is retrieved by applying the MRPT at the (computationally expensive) third order. This behavior is rationalized using a simple model (the ionized GS of two weakly interacting identical systems, each neutral system being described by two electrons in two orbitals), showing that the unphysical well is due to the canonical orbital energies which at the symmetric (delocalized) conformation lead to a sudden modification of the denominators in the perturbation expansion. In this model, the bias introduced in the second order correction to the energy is almost entirely removed going to the third order. With the results of the model in mind, one can predict that all MRPT methods in which the zero order Hamiltonian is based on canonical orbital energies are prone to present unreasonable energy profiles close to the symmetric situation. However, the model allows a strategy to be devised which can give a correct behavior even at the second order, by simply averaging the orbital energies of the two charge-localized electronic states. Such a strategy is adopted in a NEVPT2 scheme obtaining a good agreement with the third order results based on the canonical orbital energies. The answer to the question reported in the title (is this theoretical approach a reliable tool for a correct description of these systems?) is therefore positive, but care must be exercised, either in defining the orbital

  10. Method to render second order beam optics programs symplectic

    International Nuclear Information System (INIS)

    Douglas, D.; Servranckx, R.V.

    1984-10-01

    We present evidence that second order matrix-based beam optics programs violate the symplectic condition. A simple method to avoid this difficulty, based on a generating function approach to evaluating transfer maps, is described. A simple example illustrating the non-symplecticity of second order matrix methods, and the effectiveness of our solution to the problem, is provided. We conclude that it is in fact possible to bring second order matrix optics methods to a canonical form. The procedure for doing so has been implemented in the program DIMAT, and could be implemented in programs such as TRANSPORT and TURTLE, making them useful in multiturn applications. 15 refs
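    The symplectic condition itself is easy to test numerically: a map with Jacobian M is symplectic iff M^T S M = S, where S is the standard symplectic form. For a linear (first-order) map the check below passes exactly; a second-order Taylor map truncated at matrix level generally leaves a nonzero residual, which is what the generating-function approach repairs. The lattice values are illustrative.

```python
import numpy as np

# First-order optics of a drift followed by a thin focusing lens
L_d, f = 1.0, 2.5
D = np.array([[1.0, L_d],
              [0.0, 1.0]])              # drift of length L_d
F = np.array([[1.0,      0.0],
              [-1.0 / f, 1.0]])         # thin lens of focal length f
M = F @ D                               # combined transfer matrix

S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])             # symplectic form in (x, x') phase space
residual = M.T @ S @ M - S              # vanishes for a symplectic map
```

    In 2x2 the condition reduces to det(M) = 1; in multiturn tracking, a map failing this test spuriously damps or antidamps the beam, which is why restoring symplecticity matters.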

  11. Safety and reliability analysis based on nonprobabilistic methods

    International Nuclear Information System (INIS)

    Kozin, I.O.; Petersen, K.E.

    1996-01-01

    Imprecise probabilities, developed over the last two decades, offer a considerably more general theory with many advantages which make it very promising for reliability and safety analysis. The objective of the paper is to argue that imprecise probabilities are a more appropriate tool for reliability and safety analysis, that they allow the behavior of nuclear industry objects to be modeled more comprehensively, and that they give a possibility to solve some problems unsolved in the framework of the conventional approach. Furthermore, some specific examples are given from which we can see the usefulness of the tool for solving some reliability tasks
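    The flavor of the imprecise-probability calculus can be shown in miniature: if each component reliability of a series system of independent components is known only up to an interval, the system reliability is bounded by propagating the interval endpoints instead of committing to point values. The interval values below are illustrative.

```python
# Interval-valued ("imprecise") component reliabilities for a series
# system of independent components (illustrative numbers).
bounds = [(0.95, 0.99), (0.90, 0.97), (0.99, 0.999)]

r_lo, r_hi = 1.0, 1.0
for lo, hi in bounds:
    r_lo *= lo          # worst case: every component at its lower bound
    r_hi *= hi          # best case: every component at its upper bound

# The system reliability is only known to lie in [r_lo, r_hi].
```

    A conventional point-valued analysis would force a single number inside this interval; the width of [r_lo, r_hi] makes the effect of sparse component data explicit.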

  12. Review of methods for the integration of reliability and design engineering

    International Nuclear Information System (INIS)

    Reilly, J.T.

    1978-03-01

    A review of methods for the integration of reliability and design engineering was carried out to establish a reliability program philosophy, an initial set of methods, and procedures to be used by both the designer and reliability analyst. The report outlines a set of procedures which implements a philosophy that requires increased involvement by the designer in reliability analysis. Discussions of each method reviewed include examples of its application

  13. Method of core thermodynamic reliability determination in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, G.; Horche, W. (Ingenieurhochschule Zittau (German Democratic Republic). Sektion Kraftwerksanlagenbau und Energieumwandlung)

    1983-01-01

    A statistical model appropriate to determine the thermodynamic reliability and the power-limiting parameter of PWR cores is described for cases of accidental transients. The model is compared with the hot channel model hitherto applied.

  14. Method of core thermodynamic reliability determination in pressurized water reactors

    International Nuclear Information System (INIS)

    Ackermann, G.; Horche, W.

    1983-01-01

    A statistical model appropriate to determine the thermodynamic reliability and the power-limiting parameter of PWR cores is described for cases of accidental transients. The model is compared with the hot channel model hitherto applied. (author)

  15. Second-Order Learning Methods for a Multilayer Perceptron

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Purehvdorzh, B.; Puzynin, I.V.

    1994-01-01

    First- and second-order learning methods for feed-forward multilayer neural networks are studied. Newton-type and quasi-Newton algorithms are considered and compared with commonly used back-propagation algorithm. It is shown that, although second-order algorithms require enhanced computer facilities, they provide better convergence and simplicity in usage. 13 refs., 2 figs., 2 tabs
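    The first- versus second-order contrast can be reproduced on a toy convex case, logistic regression (a one-layer perceptron), where the exact Hessian is available in closed form. The data set and all hyperparameters below are invented for illustration, and a damped Newton step stands in for the quasi-Newton variants discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary classification data (illustrative only)
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + 0.3 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# First-order training: plain gradient descent (back-propagation analogue)
w_gd = np.zeros(d)
for _ in range(50):
    p = sigmoid(X @ w_gd)
    w_gd -= 0.5 * (X.T @ (p - y) / n)

# Second-order training: Newton steps with the exact (regularized) Hessian
w_nt = np.zeros(d)
for _ in range(50):
    p = sigmoid(X @ w_nt)
    grad = X.T @ (p - y) / n
    H = (X * (p * (1.0 - p))[:, None]).T @ X / n + 1e-6 * np.eye(d)
    w_nt -= np.linalg.solve(H, grad)
```

    With the same iteration budget, the Newton iterate reaches a lower loss, at the cost of forming and solving with the Hessian, which is the trade-off the paper quantifies for multilayer networks.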

  16. Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Wind power converter systems are essential subsystems in both off-shore and on-shore wind turbines. It is the main interface between generator and grid connection. This system is affected by numerous stresses where the main contributors might be defined as vibration and temperature loadings....... The temperature variations induce time-varying stresses and thereby fatigue loads. A probabilistic model is used to model fatigue failure for an electrical component in the power converter system. This model is based on a linear damage accumulation and physics of failure approaches, where a failure criterion...... is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural...

  17. Characteristics and application study of AP1000 NPPs equipment reliability classification method

    International Nuclear Information System (INIS)

    Guan Gao

    2013-01-01

    The AP1000 nuclear power plant applies an integrated approach to establish equipment reliability classification, which includes probabilistic risk assessment techniques, Maintenance Rule administrative requirements, power production reliability classification and a functional equipment group bounding method, and eventually classifies equipment reliability into 4 levels. This classification process and its result are very different from those of classical RCM and streamlined RCM. This paper studies the characteristics of the AP1000 equipment reliability classification approach, considers that equipment reliability classification should effectively support maintenance strategy development and work process control, and recommends using a combined RCM method to establish the future equipment reliability program of AP1000 nuclear power plants. (authors)

  18. Calculation of the reliability of large complex systems by the relevant path method

    International Nuclear Information System (INIS)

    Richter, G.

    1975-03-01

    In this paper, analytical methods are presented and tested with which the probabilistic reliability data of technical systems can be determined for given fault trees and block diagrams and known reliability data of the components. (orig./AK) [de

  19. Reliability of a semi-quantitative method for dermal exposure assessment (DREAM)

    NARCIS (Netherlands)

    Wendel de Joode, B. van; Hemmen, J.J. van; Meijster, T.; Major, V.; London, L.; Kromhout, H.

    2005-01-01

    Valid and reliable semi-quantitative dermal exposure assessment methods for epidemiological research and for occupational hygiene practice, applicable for different chemical agents, are practically nonexistent. The aim of this study was to assess the reliability of a recently developed

  20. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    Science.gov (United States)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
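    The allocation step itself is compact enough to sketch: a monotone transform maps each component's normalized FMEA criticality score to a weight, and the system failure-rate target is split in proportion. The component names, scores, and the cubic transform coefficients below are invented for illustration, not those derived in the paper.

```python
# Target system failure rate to be allocated (assumed value, per hour)
lambda_system = 1.0e-4

# Normalized FMEA criticality scores in [0, 1] (illustrative)
scores = {"spindle": 0.9, "feed drive": 0.6, "turret": 0.4, "coolant": 0.2}

def cubic(s):
    """Hypothetical cubic transform, monotone increasing on [0, 1]."""
    return s**3 + 0.1 * s

weights = {k: cubic(s) for k, s in scores.items()}
total = sum(weights.values())
allocation = {k: lambda_system * w / total for k, w in weights.items()}
```

    A more critical component receives a larger share of the allowable failure rate, but a cubic transform spreads the shares less extremely than an exponential one, which is the moderate-failure-rate coverage the paper argues for.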

  1. A Reliability Assessment Method for the VHTR Safety Systems

    International Nuclear Information System (INIS)

    Lee, Hyung Sok; Jae, Moo Sung; Kim, Yong Wan

    2011-01-01

    The passive safety system of the very high temperature reactor, which has attracted worldwide attention in recent decades, is introduced to improve the safety of next-generation nuclear power plant designs. Passive system functionality does not rely on an external source of energy, but on an intelligent use of natural phenomena, such as gravity, conduction and radiation, which are always present. Because of these features, it is difficult to evaluate passive safety with the existing risk analysis methodology, which was developed with active system failures in mind. Therefore, a new reliability methodology has to be considered. In this study, a preliminary evaluation and conceptualization are attempted by applying the load and capacity concept from the reliability physics model, designing a new passive system analysis methodology, and making a trial application to a paper plant.
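    The load-and-capacity concept mentioned above reduces, in the simplest case of independent normal load and capacity, to a closed-form reliability index. The numbers below are illustrative stand-ins, not VHTR design data.

```python
import math

# Independent normal load L and capacity C (illustrative values)
mu_L, sd_L = 500.0, 60.0     # demand placed on the passive system
mu_C, sd_C = 800.0, 80.0     # capacity of the passive system

beta = (mu_C - mu_L) / math.sqrt(sd_L**2 + sd_C**2)   # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))           # P(C - L <= 0) = Phi(-beta)
```

    The passive-system difficulty is precisely that the load and capacity distributions are not given: they must be propagated from uncertain boundary conditions, which is what the reliability-physics evaluation supplies.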

  2. A Hierarchical Reliability Control Method for a Space Manipulator Based on the Strategy of Autonomous Decision-Making

    Directory of Open Access Journals (Sweden)

    Xin Gao

    2016-01-01

    In order to maintain and enhance the operational reliability of a robotic manipulator deployed in space, an operational reliability system control method is presented in this paper. First, a method to classify the factors affecting operational reliability is proposed, dividing them into task-related factors and cost-related factors. Models describing the relationships between the two kinds of factors and the control variables are then established, and on this basis a multivariable, multiconstraint optimization model is constructed. Second, a hierarchical system control model which incorporates the operational reliability factors is constructed. The control process of the space manipulator is divided into three layers: task planning, path planning, and motion control. Performance parameters related to operational reliability are measured and used as the system’s feedback. Taking the factors affecting operational reliability into consideration, the system can autonomously decide which control layer should be optimized and how to optimize it, using a control-level adjustment decision module. The operational reliability factors affect these three control levels in the form of control-variable constraints. Simulation results demonstrate that the proposed method achieves a greater probability of meeting the task accuracy requirements, while extending the expected lifetime of the space manipulator.

  3. Numerical methods of higher order of accuracy for incompressible flows

    Czech Academy of Sciences Publication Activity Database

    Kozel, K.; Louda, Petr; Příhoda, Jaromír

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1734-1745 ISSN 0378-4754 Institutional research plan: CEZ:AV0Z20760514 Keywords : higher order methods * upwind methods * backward-facing step Subject RIV: BK - Fluid Dynamics Impact factor: 0.812, year: 2010

  4. A high-order SPH method by introducing inverse kernels

    Directory of Open Access Journals (Sweden)

    Le Fang

    2017-02-01

    The smoothed particle hydrodynamics (SPH) method is usually expected to be an efficient numerical tool for calculating fluid-structure interactions in compressors; however, an inherent restriction is its low-order consistency. A high-order SPH method based on inverse kernels, which is easy to implement yet efficient, is proposed to overcome this restriction. The basic inverse method and the special treatment near boundaries are introduced, along with a discussion of the combination of the Least-Square (LS) and Moving-Least-Square (MLS) methods. A detailed analysis in spectral space is then presented to clarify the behavior of the method. Finally, three test examples are shown to verify the method's behavior.

  5. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke (6) and Liu & Der Kiureghian...

  6. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project.

  7. Matrix-based system reliability method and applications to bridge networks

    International Nuclear Information System (INIS)

    Kang, W.-H.; Song Junho; Gardoni, Paolo

    2008-01-01

    Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods, whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or their statistical dependence, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated using predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated, and the probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that at least one city is disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information.
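
    For statistically independent components, the core of the matrix-based calculation can be sketched as follows (an illustrative reconstruction, not the authors' code): build the probability vector over the 2^n mutually exclusive component-state vectors, then take its inner product with a 0/1 event vector.

```python
from itertools import product

def basic_state_probs(comp_fail_probs):
    # Probability vector p over the 2^n mutually exclusive, collectively
    # exhaustive component-state vectors (1 = failed, 0 = survived),
    # assuming statistically independent components.
    states = list(product([1, 0], repeat=len(comp_fail_probs)))
    probs = []
    for state in states:
        pr = 1.0
        for failed, p in zip(state, comp_fail_probs):
            pr *= p if failed else (1.0 - p)
        probs.append(pr)
    return states, probs

def system_prob(states, probs, indicator):
    # Inner product c . p, where the event vector c is given implicitly
    # by a 0/1 indicator function over component states.
    return sum(pr for state, pr in zip(states, probs) if indicator(state))
```

    For two components with failure probabilities 0.1 and 0.2, a series system (fails if `any` component fails) gives 1 − 0.9 × 0.8 = 0.28, and a parallel system (fails only if `all` fail) gives 0.02.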

  8. Semiorders, Intervals Orders and Pseudo Orders Preference Structures in Multiple Criteria Decision Aid Methods

    Directory of Open Access Journals (Sweden)

    Fernández Barberis, Gabriela

    2013-06-01

    During the last decades, a large number of multicriteria decision aid (MCDA) methods have been proposed to help the decision maker select the best compromise alternative. Meanwhile, the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) family of outranking methods and their applications have attracted much attention from academics and practitioners. In this paper, an extension of these methods is presented that analyzes their functioning under new preference structures (NPS), namely semiorders, interval orders and pseudo-orders. These structures markedly improve the modeling, as they give more flexibility, amplitude and certainty to the formulation of preferences, since they abandon the axiom of complete transitive comparability of preferences and substitute for it the axiom of partial comparability. Noteworthy are the introduction of incomparability relations into the analysis and the consideration of preference structures that accept intransitivity of indifference. The NPS are incorporated into the three phases of the PROMETHEE methodology: enrichment of the preference structure, enrichment of the dominance relation, and exploitation of the outranking relation for decision aid, in order to finally solve the alternative-ranking problem through PROMETHEE I or PROMETHEE II, according to whether a partial or a complete ranking is required under the NPS.
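
    The pseudo-order structure maps naturally onto PROMETHEE's linear preference function, where the indifference threshold q and the strict-preference threshold p bound a zone of weak preference. The sketch below is a generic PROMETHEE II net-flow computation under that structure; the data layout and parameter names are illustrative assumptions, not taken from the paper.

```python
def pref(d, q=0.0, p=1.0):
    # Linear preference function with indifference threshold q and
    # strict-preference threshold p (the pseudo-order setting).
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def promethee2_net_flows(scores, weights, q, p):
    # scores[a][j]: value of alternative a on criterion j (higher is better).
    # Returns the PROMETHEE II net flow phi(a) for each alternative.
    n = len(scores)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = sum(w * pref(scores[a][j] - scores[b][j], q[j], p[j])
                        for j, w in enumerate(weights))
            flows[a] += pi_ab
            flows[b] -= pi_ab
    return [f / (n - 1) for f in flows]
```

    A complete ranking (PROMETHEE II) orders alternatives by decreasing net flow; a dominated alternative gets a strictly lower flow, and the flows always sum to zero.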

  9. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.

    2016-01-01

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek’s most flop-intense methods.

  11. Multilevel Fast Multipole Method for Higher Order Discretizations

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Meincke, Peter; Jorgensen, Erik

    2014-01-01

    The multi-level fast multipole method (MLFMM) for a higher order (HO) discretization is demonstrated on high-frequency (HF) problems, illustrating for the first time how an efficient MLFMM for HO can be achieved even for very large groups. Applying several novel ideas, beneficial to both lower...... order and higher order discretizations, results from a low-memory, high-speed MLFMM implementation of a HO hierarchical discretization are shown. These results challenge the general view that the benefits of HO and HF-MLFMM cannot be combined....

  12. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
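
    Utterance-level reliability can be illustrated with a chance-corrected agreement statistic such as Cohen's kappa computed over paired utterance codes. This is a generic illustration with made-up code labels; the paper's own utterance-level estimators are more elaborate.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    # Chance-corrected agreement between two coders' utterance-level codes:
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1.0 - expected)
```

    Identical code sequences yield kappa = 1, while agreement at exactly the chance rate yields kappa = 0.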

  13. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; the Monte Carlo program PATREC-MC was written to solve this problem, with the system components given in fault-tree representation. The second program, MONARC 2, was written to solve complex-system reliability problems by Monte Carlo simulation; here again the system (a residual heat removal system) is given in fault-tree representation. Third, the Monte Carlo program MONARC was used instead of a Markov diagram to simulate an electric power supply comprising two grids and two standby diesels.
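
    The essence of such reliability Monte Carlo programs, estimating a fault tree's top-event probability by sampling basic-event states, can be sketched as follows. The spray-system tree below (pump OR both valves) and its failure probabilities are invented for illustration and have nothing to do with the actual PATREC-MC or MONARC models.

```python
import random

def mc_top_event_prob(fail_probs, top_event, n=200_000, seed=7):
    # Crude Monte Carlo estimate of a fault tree's top-event probability:
    # sample each basic event independently, then evaluate the tree logic.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {c: rng.random() < p for c, p in fail_probs.items()}
        hits += top_event(state)
    return hits / n

# Hypothetical spray-system tree: the top event occurs if the pump fails
# OR both redundant valves fail.
probs = {"pump": 0.01, "valve_a": 0.05, "valve_b": 0.05}
top = lambda s: s["pump"] or (s["valve_a"] and s["valve_b"])
```

    The exact top-event probability here is 1 − (1 − 0.01)(1 − 0.05²) ≈ 0.0125, which the sampled estimate approaches as n grows.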

  14. Reliability improvement methods for sapphire fiber temperature sensors

    Science.gov (United States)

    Schietinger, C.; Adams, B.

    1991-08-01

    Mechanical, optical, electrical, and software design improvements can be brought to bear to enhance the reliability of fiber-optic sapphire-fiber temperature measurement tools in harsh environments. The optical fiber thermometry (OFT) equipment discussed is used in numerous process industries and generally involves a sapphire sensor, an optical transmission cable, and a microprocessor-based signal analyzer. OFT technology incorporating sensors for corrosive environments, hybrid sensors, and two-wavelength measurements is discussed.

  15. Vibration of carbon nanotubes with defects: order reduction methods

    Science.gov (United States)

    Hudson, Robert B.; Sinha, Alok

    2018-03-01

    Order reduction methods are widely used to reduce computational effort when calculating the impact of defects on the vibrational properties of nearly periodic structures in engineering applications, such as a gas-turbine bladed disc. However, despite obvious similarities these techniques have not yet been adapted for use in analysing atomic structures with inevitable defects. Two order reduction techniques, modal domain analysis and modified modal domain analysis, are successfully used in this paper to examine the changes in vibrational frequencies, mode shapes and mode localization caused by defects in carbon nanotubes. The defects considered are isotope defects and Stone-Wales defects, though the methods described can be extended to other defects.

  16. Study on Performance Shaping Factors (PSFs) Quantification Method in Human Reliability Analysis (HRA)

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, Inseok Jang; Seong, Poong Hyun; Park, Jinkyun; Kim, Jong Hyun

    2015-01-01

    The purpose of HRA implementation is (1) to achieve the human factors engineering (HFE) design goal of providing operator interfaces that minimize personnel errors, and (2) to conduct an integrated activity to support probabilistic risk assessment (PRA). For these purposes, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. In performing HRA, the conditions that influence human performance are represented via several context factors called performance shaping factors (PSFs). PSFs are aspects of the human's individual characteristics, environment, organization, or task that specifically degrade or improve human performance, thus respectively increasing or decreasing the likelihood of human errors. Most HRA methods evaluate the weightings of PSFs by expert judgment, and explicit guidance for evaluating the weightings is not provided. It is widely known that the performance of the human operator is one of the critical factors determining the safe operation of NPPs, and HRA methods have been developed to identify the possibility and mechanism of human errors. In performing an HRA, the effect of PSFs which may increase or decrease human error should be investigated; however, the effects of PSFs have so far been estimated by expert judgment. Accordingly, in order to estimate the effects of PSFs objectively, a quantitative framework that estimates PSFs by using PSF profiles is introduced in this paper.
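
    As a concrete example of PSF weighting, the SPAR-H method multiplies a nominal human error probability (NHEP) by the product of PSF multipliers and applies a bounding correction when three or more negative PSFs are present, so the result stays a valid probability. A minimal sketch of that adjustment (paraphrased from the published SPAR-H formula, not from this paper):

```python
def spar_h_hep(nhep, psf_multipliers):
    # SPAR-H-style adjustment: nominal HEP scaled by the composite PSF.
    # With >= 3 negative PSFs (multiplier > 1), the bounding correction
    # HEP = NHEP*C / (NHEP*(C - 1) + 1) keeps the result in [0, 1].
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)
```

    For example, an NHEP of 0.001 with three strongly negative PSFs (multiplier 10 each) yields about 0.5 rather than the naive product 1.0.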

  17. High order spectral difference lattice Boltzmann method for incompressible hydrodynamics

    Science.gov (United States)

    Li, Weidong

    2017-09-01

    This work presents a lattice Boltzmann equation (LBE) based high order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have good tensor-product properties, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first-order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin (HDG) or local discontinuous Galerkin (LDG) methods, are needed to discretize a diffusion term, which simplifies the algorithm and implementation of the high order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions of these cases, convergence studies are given, and the designed accuracy of the proposed SD-LBM is clearly verified.

  18. New high order FDTD method to solve EMC problems

    Directory of Open Access Journals (Sweden)

    N. Deymier

    2015-10-01

    In the electromagnetic compatibility (EMC) context, we are interested in developing new accurate methods to solve Maxwell’s equations in the time domain efficiently and accurately. Indeed, usual methods such as FDTD or FVTD present important dissipative and/or dispersive errors which prevent a good numerical approximation of the physical solution for a given industrial scene unless a mesh with a very small cell size is used. To avoid this problem, schemes like the discontinuous Galerkin (DG) method, based on higher order spatial approximations, have been introduced and studied on unstructured meshes. However, the cost of this kind of method can become prohibitive depending on the mesh used. In this paper, we first present a higher order spatial approximation method on Cartesian meshes. It is based on a finite element approach and recovers, at order 1, the well-known Yee scheme. Next, to deal with EMC problems, a non-oriented thin-wire formalism is proposed for this method. Finally, several examples are given to present the benefits of this new method by comparison with both the Yee scheme and DG approaches.
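
    For reference, the order-1 scheme that the proposed method recovers, the classic Yee leapfrog update, can be sketched in one dimension as follows. The normalized units, grid sizes and soft Gaussian source are illustrative choices, not taken from the paper.

```python
import math

def fdtd_1d(nx=200, nt=400):
    # Minimal 1-D Yee scheme in normalized units: E and H live on
    # staggered grids and are advanced alternately (leapfrog in time).
    ez = [0.0] * nx
    hy = [0.0] * nx
    c = 0.5  # Courant number; the 1-D scheme is stable for c <= 1
    for t in range(nt):
        for i in range(nx - 1):
            hy[i] += c * (ez[i + 1] - ez[i])
        # Soft Gaussian source injected at one grid cell.
        ez[nx // 4] += math.exp(-((t - 30.0) / 10.0) ** 2)
        for i in range(1, nx):
            ez[i] += c * (hy[i] - hy[i - 1])
    return ez
```

    The endpoints `ez[0]` and the last `hy` cell are never updated, which acts as a crude reflecting boundary; a production code would add absorbing boundaries.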

  19. Hybrid RANS-LES using high order numerical methods

    Science.gov (United States)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  20. Response and reliability analysis of nonlinear uncertain dynamical structures by the probability density evolution method

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri

    2016-01-01

    The paper deals with the response and reliability analysis of hysteretic or geometric nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics...... of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov......–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these need to be relatively low. In order to handle this problem an approach is suggested, which...

  1. Calculation of noninformative prior of reliability parameter and initiating event frequency with Jeffreys method

    International Nuclear Information System (INIS)

    He Jie; Zhang Binbin

    2013-01-01

    In the probabilistic safety assessment (PSA) of nuclear power plants, there are few historical records on some initiating event frequencies or component failures in industry. In order to determine the noninformative priors of such reliability parameters and initiating event frequencies, the Jeffreys method in Bayesian statistics was employed. The mathematical mechanism of the Jeffreys prior and the simplified constrained noninformative distribution (SCNID) were elaborated in this paper. The Jeffreys noninformative formulas and the credible intervals of the Gamma-Poisson and Beta-Binomial models were introduced. As an example, the small break loss-of-coolant accident (SLOCA) was employed to show the application of the Jeffreys prior in determining an initiating event frequency. The result shows that the Jeffreys method is an effective method for noninformative prior calculation. (authors)
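
    For the Gamma-Poisson model mentioned above, the Jeffreys prior for a Poisson rate is proportional to λ^(−1/2), i.e. Gamma(1/2, 0), so observing x events over exposure time T gives the conjugate posterior Gamma(x + 1/2, T). A minimal sketch of that update:

```python
def jeffreys_poisson_posterior(events, exposure_time):
    # Jeffreys-prior Bayesian update for a Poisson rate: with x events
    # in exposure time T, the posterior is Gamma(x + 1/2, T) in the
    # (shape, rate) parameterization.
    shape = events + 0.5
    rate = exposure_time
    mean = shape / rate
    variance = shape / rate ** 2
    return shape, rate, mean, variance
```

    For example, 2 events in 1000 reactor-hours gives a posterior mean rate of 2.5/1000 = 2.5e-3 per hour; credible intervals follow from the Gamma quantiles.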

  2. Robust fractional order differentiators using generalized modulating functions method

    KAUST Repository

    Liu, Dayan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This paper aims at designing a fractional order differentiator for a class of signals satisfying a linear differential equation with unknown parameters. A generalized modulating functions method is proposed, first to estimate the unknown parameters and then to derive accurate integral formulae for the left-sided Riemann-Liouville fractional derivatives of the studied signal. Unlike the improper integral in the definition of the left-sided Riemann-Liouville fractional derivative, the integrals in the proposed formulae can be proper and can be considered as a low-pass filter by choosing appropriate modulating functions. Hence, digital fractional order differentiators applicable for on-line applications are deduced using a numerical integration method in the discrete noisy case. Moreover, an error analysis is given for noise error contributions due to a class of stochastic processes. Finally, numerical examples are given to show the accuracy and robustness of the proposed fractional order differentiators.
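
    The modulating-functions differentiator itself is beyond a short sketch, but the quantity being estimated can be illustrated with the standard Grünwald-Letnikov approximation of a Riemann-Liouville derivative (a textbook scheme, not the paper's method):

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    # Grünwald-Letnikov approximation of the order-alpha derivative at t
    # (lower terminal 0), using the recursive binomial weights
    # w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
    n = int(t / h)
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        acc += w * f(t - k * h)
    return acc / h ** alpha
```

    For alpha = 1 the weights collapse to a backward difference, and for f(t) = t the order-1/2 derivative at t = 1 approaches 2/sqrt(pi) ≈ 1.128.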

  4. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

    The probability method chosen for analyzing reactor system reliability is considered realistic since it is based on verified experimental data; in fact, it is a statistical method. The probability method developed takes into account the probability distributions of the permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general and was applied to the thermal safety analysis of a reactor system. This analysis makes it possible to examine the basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.

  5. Improved Multilevel Fast Multipole Method for Higher-Order discretizations

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Meincke, Peter; Jorgensen, Erik

    2014-01-01

    The Multilevel Fast Multipole Method (MLFMM) allows for a reduced computational complexity when solving electromagnetic scattering problems. Combining this with the reduced number of unknowns provided by Higher-Order discretizations has proven to be a difficult task, with the general conclusion b...

  6. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    Directory of Open Access Journals (Sweden)

    Xuyong Chen

    2017-01-01

    Due to many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic reliability index and to narrow its range. If the range of the reliability index reduces to an acceptable accuracy, the solution is considered convergent and the nonprobabilistic reliability index is obtained. The case study indicates that the proposed method avoids an oscillating iteration process, makes the iteration stable and convergent, reduces the number of iteration steps significantly, and improves computational efficiency and precision significantly compared with the traditional nonprobabilistic response surface method. Finally, a nonprobabilistic reliability evaluation process for bridges is built by evaluating the reliability of a three-span PC continuous rigid-frame bridge using the proposed method, which appears simpler and more reliable when samples and parameters are lacking in bridge nonprobabilistic reliability evaluation.
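
    The nonprobabilistic reliability index that the method brackets can be written in closed form for a linear limit state, which makes a compact illustration (a generic interval-model formula, not the paper's algorithm):

```python
def interval_reliability_index(a0, coeffs, centers, radii):
    # Nonprobabilistic (interval) reliability index for a LINEAR limit state
    # g(x) = a0 + sum(a_i * x_i), with each x_i in [c_i - r_i, c_i + r_i].
    # eta = center(g) / radius(g); eta > 1 means g stays positive over the
    # whole interval box (reliable), eta < -1 means certain failure.
    center = a0 + sum(a * c for a, c in zip(coeffs, centers))
    radius = sum(abs(a) * r for a, r in zip(coeffs, radii))
    return center / radius
```

    For g = R − S with resistance R ∈ [140, 160] and load effect S ∈ [90, 110], the index is 50/20 = 2.5 > 1, so the structure is reliable over the whole interval box. For nonlinear limit states the index is not available in closed form, which is where a response-surface iteration such as the one proposed above comes in.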

  7. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In nuclear power plants, all safety-related equipment, including cables, operating under harsh environments should undergo equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience, and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than analysis or operating experience, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these, the design basis event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including a specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam should be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, when high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly rise above the target temperature. Therefore, the temperature and pressure in the test chamber keep fluctuating during the DBE simulation test to meet the target temperature and pressure. The fairness and accuracy of the test results should be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility.

  8. Benchmarking with high-order nodal diffusion methods

    International Nuclear Information System (INIS)

    Tomasevic, D.; Larsen, E.W.

    1993-01-01

    Significant progress in the solution of multidimensional neutron diffusion problems was made in the late 1970s with the introduction of nodal methods. Modern nodal reactor analysis codes provide significant improvements in both accuracy and computing speed over earlier codes based on fine-mesh finite difference methods. In the past, the performance of advanced nodal methods was determined by comparisons with fine-mesh finite difference codes. More recently, the excellent spatial convergence of nodal methods has permitted their use in establishing reference solutions for some important benchmark problems. The recent development of the self-consistent high-order nodal diffusion method and its subsequent variational formulation has permitted the calculation of reference solutions with one node per assembly mesh size. In this paper, we compare results for four selected benchmark problems to those obtained by high-order response matrix methods and by two well-known state-of-the-art nodal methods (the "analytical" and "nodal expansion" methods)

  9. Optimized low-order explicit Runge-Kutta schemes for high-order spectral difference method

    KAUST Repository

    Parsani, Matteo

    2012-01-01

    Optimal explicit Runge-Kutta (ERK) schemes with large stable step sizes are developed for method-of-lines discretizations based on the spectral difference (SD) spatial discretization on quadrilateral grids. These methods involve many stages and provide the optimal linearly stable time step for a prescribed SD spectrum and the minimum leading truncation error coefficient, while admitting a low-storage implementation. Using a large number of stages, the new ERK schemes lead to efficiency improvements larger than 60% over standard ERK schemes for 4th- and 5th-order spatial discretization.

  10. Higher order methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Average microscopic reaction rates need to be estimated at each step. → Traditional predictor-corrector methods use zeroth and first order predictions. → Increasing predictor order greatly improves results. → Increasing corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
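
    The predictor extrapolation comparison can be illustrated on a hypothetical single-nuclide depletion problem (all rates and step sizes below are made up): with a linearly varying one-group reaction rate, extrapolating linearly from the previous step reproduces the mid-step average rate exactly, while the constant (zeroth order) prediction currently used leaves an O(h) error per step.

```python
import math

# Sketch: constant vs. linear extrapolation of the one-group reaction rate
# in a Bateman-type depletion step dN/dt = -r(t) * N, whose exact solution
# is N(T) = N0 * exp(-integral of r). Single hypothetical nuclide only.

def deplete(r_avg, n, h):
    """One depletion step with a constant (step-averaged) reaction rate."""
    return n * math.exp(-r_avg * h)

def run(order, r, n0, h, steps):
    # In practice the first step has no history; here we seed the linear
    # variant with the known rate at t = -h for clarity.
    n, r_prev = n0, r(-h)
    for i in range(steps):
        t = i * h
        if order == 0:                  # constant: rate frozen at step start
            r_avg = r(t)
        else:                           # linear: extrapolate to mid-step
            r_avg = r(t) + 0.5 * (r(t) - r_prev)
        r_prev = r(t)
        n = deplete(r_avg, n, h)
    return n

r = lambda t: 0.1 * (1.0 + 0.5 * t)                 # linearly rising rate
exact = math.exp(-(0.1 * 1.0 + 0.1 * 0.5 / 2.0))    # exp(-integral over [0,1])
err_const = abs(run(0, r, 1.0, 0.25, 4) - exact)
err_lin = abs(run(1, r, 1.0, 0.25, 4) - exact)
```

For this linear rate the linear extrapolation is exact up to rounding, while the constant prediction accumulates a visible bias, mirroring the paper's finding that raising the predictor order pays off.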

  11. Assessment and Improving Methods of Reliability Indices in Bakhtar Regional Electricity Company

    Directory of Open Access Journals (Sweden)

    Saeed Shahrezaei

    2013-04-01

    Full Text Available Reliability of a system is its ability to perform prospective duties in the future and the probability of desirable operation in carrying out predetermined duties. Failure data for power system elements are the main input for reliability assessment of the network. The goal of reliability assessment is to determine characteristic reliability parameters from system history data; these parameters help to identify weak points of the system. In other words, the goal of reliability assessment is to improve operation and to decrease failures and power outages. This paper assesses the reliability indices of Bakhtar Regional Electricity Company up to 1393 and examines improving methods and their effects on the reliability indices in this network. DIgSILENT Power Factory software is employed for simulation. Simulation results show the positive effect of the improving methods on the reliability indices of Bakhtar Regional Electricity Company.

  12. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    Science.gov (United States)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment, and in the conduct of clinical studies. Several reliability studies have been conducted in Western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are at a preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implications for practice, education, and training. An introduction to reliability estimates and different study designs and statistical analyses is given for future studies in Ayurveda. PMID:23930037

  13. Identification of fractional order systems using modulating functions method

    KAUST Repository

    Liu, Dayan

    2013-06-01

    The modulating functions method has been used for the identification of linear and nonlinear systems. In this paper, we generalize this method to the on-line identification of fractional order systems based on the Riemann-Liouville fractional derivatives. First, a new fractional integration by parts formula involving the fractional derivative of a modulating function is given. Then, we apply this formula to a fractional order system, for which the fractional derivatives of the input and the output can be transferred onto the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear system. Using this method, we do not need any initial values, which are usually unknown and not equal to zero. Nor do we need to estimate the fractional derivatives of the noisy output. Moreover, it is shown that the proposed estimators are robust against high frequency sinusoidal noises and noises due to a class of stochastic processes. Finally, the efficiency and the stability of the proposed method are confirmed by some numerical simulations.
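
    The core trick — multiplying by a modulating function that vanishes at both endpoints and integrating by parts so that derivatives land on the known modulating function rather than on the noisy data — is easiest to see in the integer-order analogue, since the fractional case needs Riemann-Liouville machinery. The sketch below uses a hypothetical first-order system y' = -a·y + b·u; the modulating functions t^k (T-t)^k and all numbers are illustrative.

```python
import math

# Multiplying y' = -a*y + b*u by phi with phi(0) = phi(T) = 0 and
# integrating by parts gives, with no y' and no initial condition:
#   integral(phi' * y) = a * integral(phi * y) - b * integral(phi * u)
# Two modulating functions give a 2x2 linear system for (a, b).

T, N = 2.0, 4000
h = T / N
ts = [i * h for i in range(N + 1)]
a_true, b_true = 2.0, 1.0
u = lambda t: 1.0
y = lambda t: (b_true / a_true) * (1.0 - math.exp(-a_true * t))  # "measured"

def trapz(vals):                       # composite trapezoid rule
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def phi(k, t):                         # modulating functions t^k (T-t)^k
    return t**k * (T - t)**k

def dphi(k, t):
    return k * t**(k - 1) * (T - t)**k - k * t**k * (T - t)**(k - 1)

rows = []                              # one equation per modulating function
for k in (2, 3):
    I_y = trapz([phi(k, t) * y(t) for t in ts])
    I_u = trapz([phi(k, t) * u(t) for t in ts])
    I_dy = trapz([dphi(k, t) * y(t) for t in ts])
    rows.append((I_y, -I_u, I_dy))     # a*I_y - b*I_u = I_dy

(a11, a12, c1), (a21, a22, c2) = rows  # solve 2x2 system by Cramer's rule
det = a11 * a22 - a12 * a21
a_est = (c1 * a22 - a12 * c2) / det
b_est = (a11 * c2 - c1 * a21) / det
```

Only integrals of the measured output appear, which is what makes the approach robust to noise and independent of initial conditions.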

  14. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally the bivariate blending method was......, on the other hand, lighter than the single-step method....

  15. On the analysis of glow curves with the general order kinetics: Reliability of the computed trap parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ortega, F. [Facultad de Ingeniería (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Av. del Valle 5737, 7400 Olavarría (Argentina); Santiago, M.; Martinez, N.; Marcazzó, J.; Molina, P.; Caselli, E. [Instituto de Física Arroyo Seco (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Pinto 399, 7000 Tandil (Argentina)

    2017-04-15

    Nowadays the most widely employed kinetics for analyzing glow curves is the general order (GO) kinetics proposed by C. E. May and J. A. Partridge. As shown in many articles, this kinetics might yield wrong parameters characterizing traps and recombination centers. In this article this kinetics is compared with the modified general order kinetics put forward by M. S. Rasheedy, by analyzing synthetic glow curves. The results show that the modified kinetics gives parameters that are more accurate than those yielded by the original general order kinetics. A criterion is reported for evaluating the accuracy of the trap parameters found by deconvolving glow curves. This criterion was employed to assess the reliability of the trap parameters of the YVO{sub 4}: Eu{sup 3+} compounds.

  16. RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS

    Institute of Scientific and Technical Information of China (English)

    Sun Youchao; Shi Jun

    2004-01-01

    The reliability assessment spanning the adjacent unit and system levels is the most important part of multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity together with the principle of equivalence of information quantity, an entropy method of data information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived from the principle of information quantity equivalence. General models for entropy-method synthesis assessment of approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. Applications of the entropy method are discussed by way of practical examples. Compared with traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.

  17. The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods

    Science.gov (United States)

    Yeh, Stuart S.

    2012-01-01

    This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…

  18. Reliability testing of tendon disease using two different scanning methods in patients with rheumatoid arthritis

    DEFF Research Database (Denmark)

    Bruyn, George A W; Möller, Ingrid; Garrido, Jesus

    2012-01-01

    To assess the intra- and interobserver reliability of musculoskeletal ultrasonography (US) in detecting inflammatory and destructive tendon abnormalities in patients with RA using two different scanning methods.

  19. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessing method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and a linearized power flow, an optimal power flow minimizing the cost of conventional power generation is solved. A reliability assessment for the distribution grid is thus implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessing method calculates the reliability indices much faster than the Monte Carlo method while ensuring accuracy.
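
    For reference, the Monte Carlo baseline that the paper accelerates can be sketched in a few lines. Wind speed is drawn from a Weibull distribution and solar irradiance from a Beta distribution, as in the abstract; the scale/shape values, turbine ratings, conventional capacity and load below are all hypothetical, and the simple linear turbine ramp stands in for a real power curve.

```python
import random

random.seed(1)  # deterministic demo

def wind_power(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Hypothetical 2 MW turbine with a linear ramp between cut-in and rated."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * (v - cut_in) / (rated_v - cut_in)

def sample_generation():
    v = random.weibullvariate(8.0, 2.0)          # scale 8 m/s, shape 2
    solar = 1.5 * random.betavariate(2.0, 2.0)   # 1.5 MW peak PV
    conventional = 4.0                           # MW, assumed always available
    return wind_power(v) + solar + conventional

n, load = 100_000, 6.5                           # MW constant load
shortfall_hours, energy_short = 0, 0.0
for _ in range(n):
    g = sample_generation()
    if g < load:
        shortfall_hours += 1
        energy_short += load - g

lolp = shortfall_hours / n                       # Loss Of Load Probability
eens = energy_short / n * 8760.0                 # Expected Energy Not Supplied, MWh/yr
```

The sampling loop is exactly the cost the proposed discretized optimal power flow avoids while targeting the same two indices.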

  20. A comparative study on the HW reliability assessment methods for digital I and C equipment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hoan Sung; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Lee, G. Y. [Korea Atomic Energy Research Institute, Taejeon (Korea); Kim, M. C. [Korea Advanced Institute of Science and Technology, Taejeon (Korea); Jun, S. T. [KHNP, Taejeon (Korea)

    2002-03-01

    It is necessary to predict or evaluate the reliability of electronic equipment for the probabilistic safety analysis of digital instrumentation and control equipment. However, most reliability prediction databases contain no data for up-to-date equipment, and failure modes are not classified. Prediction results for a specific component differ according to the method and database used, and the same holds for boards and systems. This study addresses reliability prediction of the PDC system of Wolsong NPP unit 1 as an example of digital I and C equipment. Various reliability prediction methods and failure databases are used in calculating the reliability, in order to compare the sensitivity and accuracy of each model and database. Many considerations for the reliability assessment of digital systems are derived from the results of this study. 14 refs., 19 figs., 15 tabs. (Author)

  1. The higher order flux mapping method in large size PHWRs

    International Nuclear Information System (INIS)

    Kulkarni, A.K.; Balaraman, V.; Purandare, H.D.

    1997-01-01

    A new higher order method is proposed for obtaining the flux map using a single set of expansion modes. In this procedure, the differences between the predicted and actual detector readings are used to determine the strength of the local fluxes around the detector sites. The local fluxes arise from constant perturbations (both extrinsic and intrinsic) taking place in the reactor. (author)

  2. Wavelet Methods for Solving Fractional Order Differential Equations

    OpenAIRE

    A. K. Gupta; S. Saha Ray

    2014-01-01

    Fractional calculus is a field of applied mathematics which deals with derivatives and integrals of arbitrary orders. The fractional calculus has gained considerable importance during the past decades mainly due to its application in diverse fields of science and engineering such as viscoelasticity, diffusion of biological population, signal processing, electromagnetism, fluid mechanics, electrochemistry, and many more. In this paper, we review different wavelet methods for solving both linea...

  3. International Conference on Spectral and High-Order Methods

    CERN Document Server

    Dumont, Ney; Hesthaven, Jan

    2017-01-01

    This book features a selection of high-quality papers chosen from the best presentations at the International Conference on Spectral and High-Order Methods (2016), offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.

  4. Method of reliability allocation based on fault tree analysis and fuzzy math in nuclear power plants

    International Nuclear Information System (INIS)

    Chen Zhaobing; Deng Jian; Cao Xuewu

    2005-01-01

    Reliability allocation is a difficult multi-objective optimization problem. It can be applied not only to determine the reliability characteristics of reactor systems, subsystems and main components, but also to improve the design, operation and maintenance of nuclear plants. Fuzzy mathematics, one of the powerful tools for fuzzy optimization, and fault tree analysis, one of the effective methods of reliability analysis, are applied to the reliability allocation model to deal, respectively, with the fuzzy character of some factors and with the choice of subsystems. A failure rate allocation model is thus developed on the basis of fault tree analysis and fuzzy mathematics. For the reliability constraint factors, the six most important ones are chosen according to practical needs. The subsystems are selected by top-level fault tree analysis, which avoids allocating reliability to all equipment and components, including unnecessary parts. During the allocation process, some factors can be calculated or measured quantitatively, while others can only be assessed qualitatively by expert rating. Fuzzy decision and dualistic contrast are therefore adopted to carry out the reliability allocation with the help of fault tree analysis. Finally, the example of the emergency diesel generator's reliability allocation is used to illustrate the model and to show that it is simple and applicable. (authors)
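
    Stripped of the fuzzy scoring and the fault tree screening, the underlying allocation step is a weighted split of the allowed system failure rate. The crisp sketch below uses made-up subsystem names, factor scores and a made-up system failure rate target; the paper's method would derive the weights from fuzzy decision and dualistic contrast instead of a plain sum.

```python
# Simplified crisp sketch of weighted failure-rate allocation: a subsystem
# that scores higher on the constraint factors (more complex, harsher
# environment, less mature, ...) receives a larger share of the allowed
# system failure rate. All names and numbers are hypothetical.

factor_scores = {            # scores 1..10 per allocation factor
    "emergency_diesel": {"complexity": 7, "environment": 6, "maturity": 5},
    "control_logic":    {"complexity": 4, "environment": 3, "maturity": 2},
    "sensors":          {"complexity": 3, "environment": 5, "maturity": 3},
}

lambda_system = 1.0e-4       # allowed system failure rate [1/h], hypothetical

weights = {name: sum(s.values()) for name, s in factor_scores.items()}
total = sum(weights.values())
allocation = {name: lambda_system * w / total for name, w in weights.items()}
```

By construction the allocated rates sum back to the system target, and the hardest subsystem receives the largest budget.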

  5. Detecting Violations of Unidimensionality by Order-Restricted Inference Methods

    Directory of Open Access Journals (Sweden)

    Moritz Heene

    2016-03-01

    Full Text Available The assumptions of unidimensionality and quantitative measurement represent two of the key concepts underlying most commonly applied item response models. The assumption of unidimensionality is frequently tested, although most commonly applied methods have been shown to have low power against violations of unidimensionality, whereas the assumption of quantitative measurement remains in most cases only an (implicit) assumption. On the basis of a simulation study it is shown that order-restricted inference methods within a Markov chain Monte Carlo framework can successfully be used to test both assumptions.

  6. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  7. Rapid and Reliable HPLC Method for the Determination of Vitamin ...

    African Journals Online (AJOL)

    Purpose: To develop and validate an accurate, sensitive and reproducible high performance liquid chromatographic (HPLC) method for the quantitation of vitamin C in pharmaceutical samples. Method: The drug and the standard were eluted from Superspher RP-18 (250 mm x 4.6 mm, 10 μm particle size) at 20 °C.

  8. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  9. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    Science.gov (United States)

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David

    2015-08-01

    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved).
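
    The kappa values quoted above are chance-corrected agreement statistics (Cohen's kappa). A minimal sketch of the computation for two raters follows; the ratings are made up for illustration.

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance from each rater's marginal label frequencies.

def cohens_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Two hypothetical clinicians diagnosing 8 patients (1 = disorder present)
rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)
```

Here raw agreement is 75%, yet kappa is only about 0.47 — the same effect that makes test-retest designs, with their extra sources of disagreement, report much lower values than audio-recording designs.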

  10. Methods for reliability evaluation of trust and reputation systems

    Science.gov (United States)

    Janiszewski, Marek B.

    2016-09-01

    Trust and reputation systems are a systematic approach to building security on the basis of observations of node behaviour. Exchange of nodes' opinions about other nodes is very useful for identifying nodes which act selfishly or maliciously. The idea behind trust and reputation systems gains significance from the fact that conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks such as WSN, MANET and P2P, and also in e-commerce applications. Trust and reputation systems provide benefits but can also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but such attacks have still not gained enough attention from research teams. Moreover, the joint effects of many known attacks remain a very interesting field of research. The lack of an acknowledged methodology for evaluating trust and reputation systems is a serious problem. This paper aims at presenting various approaches to the evaluation of such systems. This work also describes a generalization of many trust and reputation systems that can be used to evaluate their reliability in the context of preventing various attacks.

  11. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    Full Text Available A control method based on real-time operational reliability evaluation for a space manipulator is presented to improve the success rate of the manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given for a manipulator executing a specified task; then a control model which regulates the quantitative operational reliability is built. First, the control process is described using a state space equation. Second, process parameters are estimated in real time using a Bayesian method. Third, the expression for the system's real-time operational reliability is deduced based on the state space equation and the process parameters estimated with the Bayesian method. Finally, a control variable regulation strategy which considers the cost of control is given based on the theory of statistical process control. It is shown via simulations that this method effectively improves the operational reliability of the space manipulator control system.
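
    The real-time Bayesian updating step can be illustrated with the simplest conjugate model: treat each completed sub-move of the task as a Bernoulli trial and update a Beta prior on the success probability. This is a deliberate simplification — the paper estimates state-space process parameters rather than a Bernoulli rate — and the telemetry and threshold below are hypothetical.

```python
# Beta-Bernoulli sketch of on-line operational reliability estimation:
# posterior after each observed sub-task outcome is available in closed form,
# so the controller can react (e.g. slow down, re-plan) as soon as the
# estimated reliability drops below a threshold.

def update(alpha, beta, success):
    """Conjugate Beta update for one Bernoulli observation."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                            # uniform prior
for outcome in [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]:   # hypothetical telemetry
    alpha, beta = update(alpha, beta, outcome)

reliability = alpha / (alpha + beta)              # posterior mean
intervene = reliability < 0.8                     # example control threshold
```

With nine successes and one failure the posterior mean is 10/12 ≈ 0.83, so no intervention is triggered at this threshold.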

  12. Reliability research to nuclear power plant operators based on several methods

    International Nuclear Information System (INIS)

    Fang Xiang; Li Fu; Zhao Bingquan

    2009-01-01

    The paper draws on many kinds of international reliability research methods and summarizes over ten years of reliability research on Chinese nuclear power plant operators based on nuclear power plant simulator platforms. The paper shows the necessity and feasibility of research on nuclear power plant operators from many angles, including human cognitive reliability, fuzzy mathematics models and psychological research models. Applying these various research methods to operator reliability will benefit the safe operation of nuclear power plants. (authors)

  13. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2008-01-01

    The human reliability analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features, which may decrease subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan human reliability analysis (IJS-HRA) and standardized plant analysis risk human reliability analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance

  14. Comparison of methods for dependency determination between human failure events within human reliability analysis

    International Nuclear Information System (INIS)

    Cepis, M.

    2007-01-01

    The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan - Human Reliability Analysis (IJS-HRA) and Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance. (author)

  15. RCS Leak Rate Calculation with High Order Least Squares Method

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki

    2010-01-01

    As a part of the action items for application of Leak Before Break (LBB), the RCS leak rate calculation program has been upgraded in Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, and a corresponding noise reduction scheme is used. Similar work has been carried out in Korea: real-time RCS leak rate calculation programs have been upgraded and used in UCN units 3 and 4 and YGN units 1 and 2. For noise reduction, those programs used the linear regression method, which is powerful for noise reduction. But the system is not static: alternative flow paths produce mixed trend patterns in the input signal values. Under these conditions the signal trend and the linear regression average do not follow the same pattern. In this study, a high order least squares method is used to follow the trend of the signal, and the order of calculation is rearranged. The calculated results follow a reasonable trend and the procedure is physically consistent
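
    The higher-order fit itself is ordinary polynomial least squares: fit a low-degree polynomial to a window of samples and read the smoothed trend from the fitted curve instead of from a straight-line average. The pure-Python sketch below (normal equations plus Gaussian elimination) uses an illustrative window and degree; the plant program's actual windowing and signal set are not specified here.

```python
# Polynomial least squares via normal equations, for smoothing a trending
# signal that a straight line cannot follow. Degree and data are illustrative.

def polyfit(ts, ys, deg):
    """Least squares polynomial coefficients c0 + c1*t + ... + c_deg*t^deg."""
    m = deg + 1
    A = [[sum(t**(i + j) for t in ts) for j in range(m)] for i in range(m)]
    b = [sum(y * t**i for t, y in zip(ts, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = s / A[r][r]
    return coeffs

ts = [i * 0.1 for i in range(20)]
ys = [1.0 + 2.0 * t + 3.0 * t * t for t in ts]   # noise-free check data
c = polyfit(ts, ys, 2)                           # recovers [1, 2, 3]
```

On noisy data the same fit averages the noise away while still tracking a curved trend, which is the advantage over linear regression claimed above.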

  16. [A reliability growth assessment method and its application in the development of equipment in space cabin].

    Science.gov (United States)

    Chen, J D; Sun, H L

    1999-04-01

    Objective. To assess and predict the reliability of equipment dynamically by making full use of the various test information generated in the development of products. Method. A new reliability growth assessment method based on the Army Materiel Systems Analysis Activity (AMSAA) model was developed. The method combines the AMSAA model with test data conversion technology. Result. The assessment and prediction results for a space-borne equipment conform to expectations. Conclusion. It is suggested that this method should be further researched and popularized.
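
    For the time-truncated test case, the classical AMSAA (Crow) model has closed-form maximum likelihood estimates, which is what makes it attractive as the backbone of such an assessment. A minimal sketch with hypothetical failure times:

```python
import math

# AMSAA / Crow growth model: expected cumulative failures N(t) = lam * t^beta.
# beta < 1 indicates reliability growth. Closed-form MLEs for a test
# truncated at total time T (failure times are hypothetical).

def amsaa_fit(times, T):
    """MLE of (lam, beta) from cumulative failure times on (0, T]."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T**beta
    return lam, beta

times = [10.0, 50.0, 120.0, 200.0, 400.0]   # hours at each failure
T = 500.0                                   # total test time, hours
lam, beta = amsaa_fit(times, T)
mtbf_now = 1.0 / (lam * beta * T**(beta - 1.0))  # instantaneous MTBF at T
```

Here beta ≈ 0.57 < 1, so the fitted model indicates growth, and the instantaneous MTBF at the end of test is the quantity one would track against a growth plan.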

  17. Optimal explicit strong stability preserving Runge–Kutta methods with high linear order and optimal nonlinear order

    KAUST Repository

    Gottlieb, Sigal

    2015-04-10

    High order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic PDEs. These methods can advantageously be coupled with high order strong stability preserving time discretizations. The search for high order strong stability time-stepping methods with large allowable strong stability coefficient has been an active area of research over the last two decades. This research has shown that explicit SSP Runge-Kutta methods exist only up to fourth order. However, if we restrict ourselves to solving only linear autonomous problems, the order conditions simplify and this order barrier is lifted: explicit SSP Runge-Kutta methods of any linear order exist. These methods reduce to second order when applied to nonlinear problems. In the current work we aim to find explicit SSP Runge-Kutta methods with large allowable time-step, that feature high linear order and simultaneously have the optimal fourth order nonlinear order. These methods have strong stability coefficients that approach those of the linear methods as the number of stages and the linear order is increased. This work shows that when a high linear order method is desired, it may still be worthwhile to use methods with higher nonlinear order.
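
    A minimal concrete instance of the scheme family discussed above is the classical three-stage, third-order explicit SSP Runge-Kutta method of Shu and Osher, written as convex combinations of forward Euler steps (which is exactly what yields the SSP property). The demo problem and step size below are illustrative.

```python
import math

# SSPRK(3,3): each stage is a convex combination of forward Euler steps,
# so any strong stability bound satisfied by Euler carries over.

def ssprk33_step(f, u, t, h):
    u1 = u + h * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + h * f(t + h, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + h * f(t + 0.5 * h, u2))

f = lambda t, u: -u            # test ODE y' = -y, y(0) = 1
u, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    u = ssprk33_step(f, u, t, h)
    t += h

err = abs(u - math.exp(-1.0))  # third-order accurate: tiny for h = 0.01
```

On nonlinear problems this is where the fourth-order barrier mentioned above bites; the methods constructed in the paper keep high linear order while capping nonlinear order at four.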

  18. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm for Monte Carlo criticality calculations implementing Wielandt's method, one of the acceleration techniques for deterministic source iteration methods, is developed, and the algorithm can be successfully implemented into the MCNP code. In this algorithm, some of the fission neutrons emitted during the random walk processes are tracked within the current cycle, and thus the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely-coupled array where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. Computing time spent per cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually increases the total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution within a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
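
    Why the shift reduces the number of cycles is easiest to see in a deterministic matrix analogue (this is not the Monte Carlo algorithm itself, and the matrix and shift value are made up): for a fission-matrix eigenproblem A·x = k·x, Wielandt's method effectively power-iterates the operator (I − A/k_w)⁻¹·A, whose dominance ratio is far smaller when the shift k_w is chosen slightly above the fundamental k.

```python
# Plain power iteration vs. Wielandt-shifted iteration on a 2x2 analogue.

A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 3 and 1, dominance ratio 1/3

def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def solve2(M, b):                     # Cramer's rule for a 2x2 system
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def iterate(apply_op, recover_k, tol=1e-10, max_it=1000):
    x, k_old, its = [1.0, 0.3], None, 0
    while its < max_it:
        y = apply_op(x)
        mu = max(abs(v) for v in y)   # dominant eigenvalue estimate
        x = [v / mu for v in y]
        k = recover_k(mu)
        if k_old is not None and abs(k - k_old) < tol:
            break
        k_old, its = k, its + 1
    return k, its

k_plain, it_plain = iterate(lambda x: matvec(A, x), lambda mu: mu)

k_w = 3.5                             # shift just above the true k = 3
S = [[1.0 - A[0][0] / k_w, -A[0][1] / k_w],
     [-A[1][0] / k_w, 1.0 - A[1][1] / k_w]]
# eigenvalues of (I - A/k_w)^-1 A are mu = k*k_w/(k_w - k), hence:
k_shift, it_shift = iterate(lambda x: solve2(S, matvec(A, x)),
                            lambda mu: mu * k_w / (k_w + mu))
```

The shifted iteration reaches the same k = 3 in far fewer iterations (ratio drops from 1/3 to 1.4/21), at a higher cost per iteration — the same trade the abstract describes per cycle.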

  19. A New Method of Reliability Evaluation Based on Wavelet Information Entropy for Equipment Condition Identification

    International Nuclear Information System (INIS)

    He, Z J; Zhang, X L; Chen, X F

    2012-01-01

    Aiming at reliability evaluation of the condition identification of mechanical equipment, it is necessary to analyze condition monitoring information. A new method of reliability evaluation based on wavelet information entropy extracted from vibration signals of mechanical equipment is proposed. The method differs from traditional reliability evaluation models, which depend on probability-statistics analysis of large samples of data. The vibration signals of mechanical equipment were analyzed by means of the second generation wavelet package (SGWP). The relative energy in each frequency band of the decomposed signal, i.e. its percentage of the total signal energy, is taken as a probability. A normalized information entropy (IE) is obtained from these relative energies to describe the uncertainty of a system in place of a probability distribution, and the reliability degree is derived from this normalized wavelet information entropy. The method was successfully applied to evaluate the assembly quality reliability of a dismountable disk-drum aero-engine, and the resulting reliability degree indicates the assembly quality satisfactorily.
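
    The entropy step can be sketched as follows. The band-energy values are illustrative, the wavelet packet decomposition (SGWP) is assumed to have been done upstream, and the final mapping from entropy to a reliability degree is a simple assumed complement, not necessarily the paper's exact transform.

```python
import math

def band_entropy_reliability(band_energies):
    # Treat each band's relative energy as a probability, compute the Shannon
    # entropy, and normalize by log(N) so the entropy lies in [0, 1].
    total = sum(band_energies)
    p = [e / total for e in band_energies if e > 0]
    h = -sum(pi * math.log(pi) for pi in p)
    h_norm = h / math.log(len(band_energies))
    # Higher entropy = energy spread across bands = more disorder.  One simple
    # mapping (an assumption here) takes the reliability degree as the
    # complement of the normalized entropy.
    return 1.0 - h_norm

# Energy concentrated in one band -> low entropy -> high reliability degree.
print(round(band_entropy_reliability([0.97, 0.01, 0.01, 0.01]), 3))
# Energy spread uniformly -> maximal entropy -> reliability degree near 0.
print(round(band_entropy_reliability([0.25, 0.25, 0.25, 0.25]), 3))
```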

  20. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since failure-occurrence states are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Because the Markovian model has drawbacks for steady-state reliability computation, and the neural network for its initial training patterns, an integration called Markov-neural is developed and evaluated. Comparative analyses are performed to show the efficiency of the proposed approach. For managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications are shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
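
    A minimal example of the Markovian building block is the classic two-state (operating/failed) availability model; this is a generic sketch, not the paper's AGV-specific formulation, and the failure and repair rates are illustrative.

```python
import math

def two_state_availability(lam, mu, t):
    # Continuous-time Markov model with one operating and one failed state:
    # failure rate lam, repair rate mu.  Closed-form point availability:
    #   A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) * t)
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 0.01, 0.5   # illustrative rates (per hour)
print(round(two_state_availability(lam, mu, 0.0), 4))  # starts fully available
print(round(two_state_availability(lam, mu, 1e6), 4))  # steady state mu/(lam+mu)
```

    The steady-state value mu/(lam+mu) is the quantity for which, per the abstract, the pure Markov model is weakest and the neural network component is brought in.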

  1. Assessment of modern methods of human factor reliability analysis in PSA studies

    International Nuclear Information System (INIS)

    Holy, J.

    2001-12-01

    The report is structured as follows: Classical terms and objects (Probabilistic safety assessment as a framework for human reliability assessment; Human failure within the PSA model; Basic types of operator failure modelled in a PSA study and analyzed by HRA methods; Qualitative analysis of human reliability; Quantitative analysis of human reliability; Process of analysis of nuclear reactor operator reliability in a PSA study); New terms and objects (Analysis of dependences; Errors of omission; Errors of commission; Error forcing context); and Overview and brief assessment of human reliability analysis methods (Basic characteristics of the methods; Assets and drawbacks of each HRA method; History and prospects of the use of the methods). (P.A.)

  2. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    Science.gov (United States)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods for improving the reliability of a system, but the design often involves the mutual coupling of multiple factors. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of a critical aircraft system are computed. The results show that this method is convenient and workable, and, upon appropriate modifications, applicable to the redundancy configuration and optimization of various designs. The method therefore has good practical value.
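
    To make the optimization idea concrete, here is a small sketch (not the paper's model): a greedy discrete direct search that allocates redundant units in a series system of parallel components under a cost budget. The component reliabilities, unit costs, and budget are illustrative assumptions, and practical direct search variants (e.g. pattern search) explore more general moves.

```python
def system_reliability(n, p):
    # Series system of parallel-redundant components:
    #   R = prod_i [1 - (1 - p_i)^n_i]
    r = 1.0
    for ni, pi in zip(n, p):
        r *= 1 - (1 - pi) ** ni
    return r

def direct_search(p, cost, budget):
    # Greedy ascent: repeatedly add the spare unit giving the largest
    # reliability gain per unit cost, until the budget is exhausted.
    n = [1] * len(p)
    spent = sum(cost)
    while True:
        best, best_gain = None, 0.0
        base = system_reliability(n, p)
        for i, c in enumerate(cost):
            if spent + c <= budget:
                n[i] += 1
                gain = (system_reliability(n, p) - base) / c
                n[i] -= 1
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:
            return n, system_reliability(n, p)
        n[best] += 1
        spent += cost[best]

p = [0.90, 0.95, 0.80]      # component reliabilities (illustrative)
cost = [2.0, 3.0, 1.0]      # unit costs (illustrative)
n, r = direct_search(p, cost, budget=12.0)
print(n, round(r, 4))
```

    The search naturally pours spares into the cheap, unreliable third component first, which is the coupling between reliability, cost, and configuration that the abstract describes.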

  3. Application of reliability analysis methods to the comparison of two safety circuits

    International Nuclear Information System (INIS)

    Signoret, J.-P.

    1975-01-01

    Two circuits of different design, intended to perform the ''Low Pressure Safety Injection'' function in PWR reactors, are analyzed using reliability methods. The reliability analysis of these circuits allows the failure trees to be established and the failure probabilities derived. The dependence of these results on testing and maintenance is emphasized, as are the critical paths. The large number of results obtained may allow a well-informed choice that takes account of the reliability required for this type of circuit [fr

  4. The Language Teaching Methods Scale: Reliability and Validity Studies

    Science.gov (United States)

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  5. A method to determine validity and reliability of activity sensors

    NARCIS (Netherlands)

    Boerema, Simone Theresa; Hermens, Hermanus J.

    2013-01-01

    METHOD Four sensors were securely fastened to a mechanical oscillator (Vibration Exciter, type 4809, Brüel & Kjær) and moved at various frequencies (6.67Hz; 13.45Hz; 19.88Hz) within the range of human physical activity. For each of the three sensor axes, the sensors were simultaneously moved for

  6. Reliability and Validity of the Research Methods Skills Assessment

    Science.gov (United States)

    Smith, Tamarah; Smith, Samantha

    2018-01-01

    The Research Methods Skills Assessment (RMSA) was created to measure psychology majors' statistics knowledge and skills. The American Psychological Association's Guidelines for the Undergraduate Major in Psychology (APA, 2007, 2013) served as a framework for development. Results from a Rasch analysis with data from n = 330 undergraduates showed…

  7. A survey on the human reliability analysis methods for the design of Korean next generation reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, J. W.; Park, J. C.; Kwack, H. Y.; Lee, K. Y.; Park, J. K.; Kim, I. S.; Jung, K. W

    2000-03-01

    Enhanced features achieved by applying recent domestic technologies may characterize the safety and efficiency of the KNGR (Korea Next Generation Reactor). The human-engineered interface and control room environment are expected to benefit the human aspects of the KNGR design. However, since current methods for human reliability analysis have not been updated since THERP/SHARP, it is hard to assess the potential for human error arising from both the positive and negative effects of the design changes in the KNGR. This is a state-of-the-art report on the human reliability analysis methods potentially available for application to the KNGR design. We surveyed all technical aspects of existing HRA methods and compared them in order to obtain the requirements for assessing human error potentials within the KNGR design. We categorized the more than 10 methods into the first and the second generation according to the suggestion of Dr. Hollnagel. THERP was revisited in detail. ATHEANA, proposed by the US NRC for advanced designs, and CREAM, proposed by Dr. Hollnagel, were reviewed and compared. We conclude that the key requirements include enhancement of the early steps of human error identification and of the quantification steps, with consideration of error shaping factors extending beyond PSFs (performance shaping factors). Utilizing the steps and approaches of ATHEANA and CREAM will help attain an appropriate HRA method for the KNGR. However, the steps and data from THERP will still be maintained for continuity with previous PSA activities in the KNGR design.

  8. A survey on the human reliability analysis methods for the design of Korean next generation reactor

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Lee, J. W.; Park, J. C.; Kwack, H. Y.; Lee, K. Y.; Park, J. K.; Kim, I. S.; Jung, K. W.

    2000-03-01

    Enhanced features achieved by applying recent domestic technologies may characterize the safety and efficiency of the KNGR (Korea Next Generation Reactor). The human-engineered interface and control room environment are expected to benefit the human aspects of the KNGR design. However, since current methods for human reliability analysis have not been updated since THERP/SHARP, it is hard to assess the potential for human error arising from both the positive and negative effects of the design changes in the KNGR. This is a state-of-the-art report on the human reliability analysis methods potentially available for application to the KNGR design. We surveyed all technical aspects of existing HRA methods and compared them in order to obtain the requirements for assessing human error potentials within the KNGR design. We categorized the more than 10 methods into the first and the second generation according to the suggestion of Dr. Hollnagel. THERP was revisited in detail. ATHEANA, proposed by the US NRC for advanced designs, and CREAM, proposed by Dr. Hollnagel, were reviewed and compared. We conclude that the key requirements include enhancement of the early steps of human error identification and of the quantification steps, with consideration of error shaping factors extending beyond PSFs (performance shaping factors). Utilizing the steps and approaches of ATHEANA and CREAM will help attain an appropriate HRA method for the KNGR. However, the steps and data from THERP will still be maintained for continuity with previous PSA activities in the KNGR design

  9. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
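
    The correlation ratio method mentioned above as the comparison baseline estimates the first-order index S_i = Var(E[Y|X_i])/Var(Y) directly from samples by binning on X_i. A minimal sketch follows; the linear test model, sample size, and bin count are illustrative assumptions.

```python
import random

def first_order_index(xs, ys, bins=20):
    # Correlation-ratio (binning) estimate of the first-order sensitivity
    # index S_i = Var(E[Y | X_i]) / Var(Y), using equal-count bins of X_i.
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    ybar = sum(ys) / n
    var_y = sum((y - ybar) ** 2 for y in ys) / n
    var_cond = 0.0
    for b in range(bins):
        chunk = [y for _, y in pairs[b * n // bins:(b + 1) * n // bins]]
        m = sum(chunk) / len(chunk)
        var_cond += len(chunk) * (m - ybar) ** 2
    return var_cond / n / var_y

random.seed(0)
n = 200_000
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [3 * a + b for a, b in zip(x1, x2)]  # linear test model
# Analytic values: Var(3*X1) = 9/12, Var(X2) = 1/12, so S1 = 0.9, S2 = 0.1.
print(round(first_order_index(x1, y), 2), round(first_order_index(x2, y), 2))
```

    Unlike FAST, this estimator needs no characteristic frequencies, which is why it serves as a convenient non-parametric cross-check even for correlated parameters.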

  10. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    Science.gov (United States)

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing the reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of the retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability, validity, and both reliability and validity were investigated by sixteen, two, and nine studies respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest evidence for reliability supports the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity supports the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement; this should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Radioisotope method potentialities in machine reliability and durability enhancement

    International Nuclear Information System (INIS)

    Postnikov, V.I.

    1975-01-01

    The development of a surface activation method is reviewed with regard to the wear of machine parts. Examples demonstrating the highly promising aspects and practical application of the method are cited. The use of high-sensitivity instruments and variation of the activation depth from 10 um to 0.5 mm make it possible to perform investigations at a sensitivity of 0.05 um and to estimate the linear wear of machines. Standard diagrams are presented for measuring the wear of different machine parts by means of surface activation. Investigations performed at several Soviet technological institutes afford a set of dependences which characterize the distribution of radioactive isotopes with depth under different activation conditions for diverse metals and alloys, and which permit the study of the wear of any metal

  12. TESTING METHODS FOR MECHANICALLY IMPROVED SOILS: RELIABILITY AND VALIDITY

    Directory of Open Access Journals (Sweden)

    Ana Petkovšek

    2017-10-01

    Full Text Available A possibility of in-situ mechanical improvement for reducing the liquefaction potential of silty sands was investigated by using three different techniques: Vibratory Roller Compaction, Rapid Impact Compaction (RIC) and Soil Mixing. Material properties at all test sites were investigated before and after improvement with laboratory and in situ tests (CPT, SDMT, DPSH B, static and dynamic load plate tests, geohydraulic tests). Correlation between the results obtained by the different test methods gave inconclusive answers.

  13. How to use an optimization-based method capable of balancing safety, reliability, and weight in an aircraft design process

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, Cristina [Mendeley, Broderna Ugglasgatan, Linkoping (Sweden); Derelov, Micael; Olvander, Johan [Linkoping University, IEI, Dept. of Machine Design, Linkoping (Sweden)

    2017-03-15

    In order to help decision-makers in the early design phase to improve and make more cost-efficient system safety and reliability baselines of aircraft design concepts, a method (Multi-objective Optimization for Safety and Reliability Trade-off) that is able to handle trade-offs such as system safety, system reliability, and other characteristics, for instance weight and cost, is used. Multi-objective Optimization for Safety and Reliability Trade-off has been developed and implemented at SAAB Aeronautics. The aim of this paper is to demonstrate how the implemented method might work to aid the selection of optimal design alternatives. The method is a three-step method: step 1 involves the modelling of each considered target, step 2 is optimization, and step 3 is the visualization and selection of results (results processing). The analysis is performed within Architecture Design and Preliminary Design steps, according to the company's Product Development Process. The lessons learned regarding the use of the implemented trade-off method in the three cases are presented. The results are a handful of solutions, a basis to aid in the selection of a design alternative. While the implementation of the trade-off method is performed for companies, there is nothing to prevent adapting this method, with minimal modifications, for use in other industrial applications.

  14. How to use an optimization-based method capable of balancing safety, reliability, and weight in an aircraft design process

    International Nuclear Information System (INIS)

    Johansson, Cristina; Derelov, Micael; Olvander, Johan

    2017-01-01

    In order to help decision-makers in the early design phase to improve and make more cost-efficient system safety and reliability baselines of aircraft design concepts, a method (Multi-objective Optimization for Safety and Reliability Trade-off) that is able to handle trade-offs such as system safety, system reliability, and other characteristics, for instance weight and cost, is used. Multi-objective Optimization for Safety and Reliability Trade-off has been developed and implemented at SAAB Aeronautics. The aim of this paper is to demonstrate how the implemented method might work to aid the selection of optimal design alternatives. The method is a three-step method: step 1 involves the modelling of each considered target, step 2 is optimization, and step 3 is the visualization and selection of results (results processing). The analysis is performed within Architecture Design and Preliminary Design steps, according to the company's Product Development Process. The lessons learned regarding the use of the implemented trade-off method in the three cases are presented. The results are a handful of solutions, a basis to aid in the selection of a design alternative. While the implementation of the trade-off method is performed for companies, there is nothing to prevent adapting this method, with minimal modifications, for use in other industrial applications

  15. pd Scattering Using a Rigorous Coulomb Treatment: Reliability of the Renormalization Method for Screened-Coulomb Potentials

    International Nuclear Information System (INIS)

    Hiratsuka, Y.; Oryu, S.; Gojuki, S.

    2011-01-01

    The reliability of the screened Coulomb renormalization method, which was proposed in an elegant way by Alt-Sandhas-Zankel-Ziegelmann (ASZZ), is discussed on the basis of 'two-potential theory' for the three-body AGS equations with the Coulomb potential. In order to obtain ASZZ's formula, we define the on-shell Moller function and calculate it by using the Haeringen criterion, i.e. 'the half-shell Coulomb amplitude is zero'. By these two steps, we finally obtain the ASZZ formula for a small Coulomb phase shift. Furthermore, the reliability of the Haeringen criterion is thoroughly checked by a numerically rigorous calculation for the Coulomb LS-type equation. We find that the Haeringen criterion can be satisfied only in the higher energy region. We conclude that the ASZZ method can be verified in the case that the on-shell approximation to the Moller function is reasonable and the Haeringen criterion is reliable. (author)

  16. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703. RELIABILITY .... V , , given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory,. Technical ...

  17. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    Science.gov (United States)

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method for lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the degree of lumbar lordosis was measured via the AutoCAD software and flexible ruler methods. The study comprises two parts: intratester and intertester evaluations of the reliability, as well as the validity, of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD was shown to be a reliable and valid method for measuring lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
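
    The intraclass correlation coefficient behind such agreement figures can be sketched generically. The one-way random-effects form ICC(1,1) below is an assumption (the abstract does not state which ICC variant was used), and the ratings are made-up illustrative data, not the study's measurements.

```python
def icc_oneway(ratings):
    # One-way random-effects ICC(1,1).  `ratings` is a list of per-subject
    # lists, one rating per rater/method.
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two measurement methods agreeing closely on five subjects (illustrative):
lordosis = [[42.0, 41.5], [35.0, 35.5], [50.0, 49.0], [28.0, 28.5], [44.0, 44.5]]
print(round(icc_oneway(lordosis), 3))
```

    Near-perfect agreement drives the within-subject mean square toward zero, pushing the ICC toward 1, which is the pattern behind values like 0.984 reported above.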

  18. Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    Full Text Available We apply the variational iteration method using He's polynomials (VIMHP) to solve fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered a clear advantage of this algorithm over the decomposition method.
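
    The correction functional underlying such a scheme can be written down generically. The form below assumes the fifth-order problem is posed as u⁽⁵⁾ = f(x, u); the Lagrange multiplier shown follows from standard variational theory for this linear operator and is an illustration, not necessarily the authors' exact formulation.

```latex
% VIM correction functional for u^{(5)}(x) = f(x, u(x)):
u_{n+1}(x) = u_n(x)
  + \int_0^x \lambda(s)\,\Bigl[ u_n^{(5)}(s) - f\bigl(s, \tilde{u}_n(s)\bigr) \Bigr]\, ds,
\qquad
\lambda(s) = -\frac{(s - x)^4}{4!}.
% In VIMHP the nonlinear term is expanded via the homotopy
% u = \sum_{i \ge 0} p^{i} u_i, whose coefficients generate He's polynomials,
% and terms with like powers of p are matched.
```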

  19. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    OpenAIRE

    Chen, Xuyong; Chen, Qian; Bian, Xiaoya; Fan, Jianping

    2017-01-01

    Due to the many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic ...

  20. Pseudospectral collocation methods for fourth order differential equations

    Science.gov (United States)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  1. Kernel methods for interpretable machine learning of order parameters

    Science.gov (United States)

    Ponte, Pedro; Melko, Roger G.

    2017-11-01

    Machine learning is capable of discriminating phases of matter, and finding associated phase transitions, directly from large data sets of raw state configurations. In the context of condensed matter physics, most progress in the field of supervised learning has come from employing neural networks as classifiers. Although very powerful, such algorithms suffer from a lack of interpretability, which is usually desired in scientific applications in order to associate learned features with physical phenomena. In this paper, we explore support vector machines (SVMs), which are a class of supervised kernel methods that provide interpretable decision functions. We find that SVMs can learn the mathematical form of physical discriminators, such as order parameters and Hamiltonian constraints, for a set of two-dimensional spin models: the ferromagnetic Ising model, a conserved-order-parameter Ising model, and the Ising gauge theory. The ability of SVMs to provide interpretable classification highlights their potential for automating feature detection in both synthetic and experimental data sets for condensed matter and other many-body systems.

  2. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  3. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  4. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Full Text Available Abstract Background In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison with manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.

  5. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    Directory of Open Access Journals (Sweden)

    U. Ayala

    2014-01-01

    Full Text Available Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because, although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.

  6. An Adaptive Pseudospectral Method for Fractional Order Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Mohammad Maleki

    2012-01-01

    Full Text Available An adaptive pseudospectral method is presented for solving a class of multiterm fractional boundary value problems (FBVP) which involve Caputo-type fractional derivatives. The multiterm FBVP is first converted into a singular Volterra integrodifferential equation (SVIDE). By dividing the interval of the problem into subintervals, the unknown function is approximated using a piecewise interpolation polynomial with unknown coefficients, based on shifted Legendre-Gauss (ShLG) collocation points. Then the problem is reduced to a system of algebraic equations, thus greatly simplifying the problem. Further, some additional conditions are considered to maintain the continuity of the approximate solution and its derivatives at the interfaces of the subintervals. In order to convert the singular integrals of the SVIDE into nonsingular ones, integration by parts is utilized. In the method developed in this paper, the accuracy can be improved either by increasing the number of subintervals or by increasing the degree of the polynomial on each subinterval. Using several examples, including the Bagley-Torvik equation, the proposed method is shown to be efficient and accurate.

  7. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  8. Singular perturbations introduction to system order reduction methods with applications

    CERN Document Server

    Shchepakina, Elena; Mortell, Michael P

    2014-01-01

    These lecture notes provide a fresh approach to investigating singularly perturbed systems using asymptotic and geometrical techniques. It gives many examples and step-by-step techniques, which will help beginners move to a more advanced level. Singularly perturbed systems appear naturally in the modelling of many processes that are characterized by slow and fast motions simultaneously, for example, in fluid dynamics and nonlinear mechanics. This book’s approach consists in separating out the slow motions of the system under investigation. The result is a reduced differential system of lesser order. However, it inherits the essential elements of the qualitative behaviour of the original system. Singular Perturbations differs from other literature on the subject due to its methods and wide range of applications. It is a valuable reference for specialists in the areas of applied mathematics, engineering, physics, biology, as well as advanced undergraduates for the earlier parts of the book, and graduate stude...

  9. Variational methods for high-order multiphoton processes

    International Nuclear Information System (INIS)

    Gao, B.; Pan, C.; Liu, C.; Starace, A.F.

    1990-01-01

    Methods for applying the variationally stable procedure for Nth-order perturbative transition matrix elements of Gao and Starace [Phys. Rev. Lett. 61, 404 (1988); Phys. Rev. A 39, 4550 (1989)] to multiphoton processes involving systems other than atomic H are presented. Three specific cases are discussed: one-electron ions or atoms in which the electron-ion interaction is described by a central potential; two-electron ions or atoms in which the electronic states are described by the adiabatic hyperspherical representation; and closed-shell ions or atoms in which the electronic states are described by the multiconfiguration Hartree-Fock representation. Applications are made to the dynamic polarizability of He and the two-photon ionization cross section of Ar

  10. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  11. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal design-parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory with optimization, is the most commonly used approach for minimizing the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate such incomplete information into the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  12. Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method

    International Nuclear Information System (INIS)

    Sony T, D.T.; Situmorang, Johnny; Ismu W, Puradwi; Demon H; Mulyanto, Dwijo; Kusmono, Slamet; Santa, Sigit Asmara

    2000-01-01

    FAMECA is one of the analysis methods for determining system reliability in an industrial facility. The analysis follows a procedure consisting of identifying component functions and determining each failure mode, its severity level, and the effect of the failure. The reliability value is determined from a combination of three factors: severity level, component failure value, and component criticality. A reliability analysis of a fire system in an industrial facility has been performed by the FAMECA method. The critical components identified are the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc.

  13. Optimal explicit strong stability preserving Runge–Kutta methods with high linear order and optimal nonlinear order

    KAUST Repository

    Gottlieb, Sigal; Grant, Zachary; Higgs, Daniel

    2015-01-01

    High order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic PDEs. These methods can advantageously be coupled with high order strong stability preserving time discretizations. The search

  14. A study in the reliability analysis method for nuclear power plant structures (I)

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Byung Hwan; Choi, Seong Cheol; Shin, Ho Sang; Yang, In Hwan; Kim, Yi Sung; Yu, Young; Kim, Se Hun [Seoul, Nationl Univ., Seoul (Korea, Republic of)

    1999-03-15

    Nuclear power plant structures may be exposed to aggressive environmental effects that may cause their strength and stiffness to decrease over their service life. Although the physics of these damage mechanisms is reasonably well understood and quantitative evaluation of their effects on time-dependent structural behavior is possible in some instances, such evaluations are generally very difficult and remain novel. The assessment of existing steel containments in nuclear power plants for continued service must provide quantitative evidence that they are able to withstand future extreme loads during a service period with an acceptable level of reliability. Rational methodologies for performing the reliability assessment can be developed from mechanistic models of structural deterioration, using time-dependent structural reliability analysis to take loading and strength uncertainties into account. The final goal of this study is to develop an analysis method for the reliability of containment structures. The cause and mechanism of corrosion are first clarified and the reliability assessment method has been established. By introducing the equivalent normal distribution, a reliability analysis procedure that can determine the failure probabilities has been established. The influence of design variables on reliability and the relation between reliability and service life will be examined in the second year of this research.

  15. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    Science.gov (United States)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
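
    As a generic illustration of the FORM machinery referenced above (not the authors' rock-wedge model), the Hasofer-Lind/Rackwitz-Fiessler iteration below locates the most probable point for an assumed linear limit state g = R - S with normal variables, then reads off the reliability index and failure probability:

```python
import math

# Assumed toy limit state: resistance R ~ N(200, 20), load S ~ N(150, 10).
mu_R, sd_R = 200.0, 20.0
mu_S, sd_S = 150.0, 10.0

def g(u):
    """Limit state evaluated in standard normal space (u maps to R, S)."""
    return (mu_R + sd_R * u[0]) - (mu_S + sd_S * u[1])

def grad_g(u):
    return (sd_R, -sd_S)          # constant gradient for this linear case

# Hasofer-Lind / Rackwitz-Fiessler fixed-point iteration for the MPP
u = (0.0, 0.0)
for _ in range(20):
    gr = grad_g(u)
    norm2 = gr[0] ** 2 + gr[1] ** 2
    lam = (gr[0] * u[0] + gr[1] * u[1] - g(u)) / norm2
    u = (lam * gr[0], lam * gr[1])

beta = math.hypot(u[0], u[1])                    # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))      # Phi(-beta)
print(beta, pf)
```

    For this linear case the iteration converges in one step to beta = 50/sqrt(500); a nonlinear limit state, such as one fitted by a response surface, simply requires evaluating the true gradient at each iterate.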

  16. GPUs, a new tool of acceleration in CFD: efficiency and reliability on smoothed particle hydrodynamics methods.

    Directory of Open Access Journals (Sweden)

    Alejandro C Crespo

    Full Text Available Smoothed Particle Hydrodynamics (SPH is a numerical method commonly used in Computational Fluid Dynamics (CFD to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs or Graphics Processor Units (GPUs, a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.

  17. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    Science.gov (United States)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct (acquisition) cost, while bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper presents a bearing evaluation method based on total-cost-of-ownership analysis that takes both price and maintenance cost as decision criteria. Furthermore, since failure data are scarce when the bearing evaluation phase is conducted, a reliability prediction method is used to predict bearing reliability from its dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper but less reliable one is preferable. This contextuality can give rise to conflict between stakeholders; thus, the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
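
    The short-term/long-term trade-off can be sketched with the ISO 281 basic rating life formula for ball bearings, L10 = (C/P)^3 million revolutions; all prices, loads, and horizons below are invented for illustration:

```python
# Toy total-cost-of-ownership comparison for two hypothetical bearings.
def l10_hours(C, P, rpm):
    """Basic rating life in operating hours: L10 = (C/P)^3 million revs."""
    return (C / P) ** 3 * 1e6 / (rpm * 60.0)

def total_cost(price, C, P, rpm, horizon_hours, labour_cost):
    """Acquisition cost plus expected replacement cost over the horizon."""
    expected_replacements = horizon_hours / l10_hours(C, P, rpm)
    return price + expected_replacements * (price + labour_cost)

P, rpm, labour = 5_000.0, 1_500.0, 120.0      # load (N), speed (rpm), swap cost
cheap = dict(price=40.0, C=20_000.0)          # lower dynamic load rating
premium = dict(price=90.0, C=30_000.0)        # higher dynamic load rating

for horizon in (100.0, 10_000.0):             # short vs long planning horizon
    costs = {name: total_cost(b["price"], b["C"], P, rpm, horizon, labour)
             for name, b in [("cheap", cheap), ("premium", premium)]}
    print(horizon, costs)
```

    With these made-up numbers the cheap bearing wins over the 100 h horizon, while the premium bearing wins over 10,000 h, reproducing the contextuality the abstract describes.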

  18. Assessment of Advanced Life Support competence when combining different test methods--reliability and validity

    DEFF Research Database (Denmark)

    Ringsted, C; Lippert, F; Hesselfeldt, R

    2007-01-01

    ...Cardiac Arrest Simulation Test (CASTest) scenarios for the assessments according to guidelines 2005. AIMS: To analyse the reliability and validity of the individual sub-tests provided by ERC and to find a combination of MCQ and CASTest that provides a reliable and valid single effect measure of ALS competence. METHODS: Two groups of participants were included in this randomised, controlled experimental study: a group of newly graduated doctors, who had not taken the ALS course (N=17), and a group of students, who had passed the ALS course 9 months before the study (N=16). Reliability in terms of inter... that possessed high reliability, equality of test sets, and ability to discriminate between the two groups of supposedly different ALS competence. CONCLUSIONS: ERC sub-tests of ALS competence possess sufficient reliability and validity. A combined ALS score with equal weighting of one MCQ and one CASTest can...

  19. A Novel Reliability Enhanced Handoff Method in Future Wireless Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Wang YuPeng

    2016-01-01

    Full Text Available As demand increases, future networks will follow the trends of network variety and service flexibility, which require heterogeneous network deployment and reliable communication methods. In practice, most communication failures happen due to bad radio link quality; in particular, high-speed users suffer severely from radio link failures, which cause communication interruption until the radio link is recovered. To make communication more reliable, especially for high-mobility users, we propose a novel communication handoff mechanism that reduces the occurrence of service interruptions. Computer simulations show that service reliability is greatly improved.

  20. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    NARCIS (Netherlands)

    Gharouni-Nik, M.; Naeimi, M.; Ahadi, S.; Alimoradi, Z.

    2014-01-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. Main support elements contribute

  1. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for Active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
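
    The importance-sampling ingredient can be isolated in a few lines: sample around a FORM design point u* and reweight each failing sample by the ratio of standard normal densities. A toy linear limit state stands in for the expensive mechanical model here, and there is no Kriging metamodel, so this is only the flavor of AK-IS, not the method itself:

```python
import math, random

random.seed(0)

def g(u):
    """Toy linear limit state in standard normal space; g <= 0 is failure."""
    return 50.0 + 20.0 * u[0] - 10.0 * u[1]

ustar = (-2.0, 1.0)        # design point (MPP) from a preliminary FORM run
beta2 = ustar[0] ** 2 + ustar[1] ** 2

N, acc = 20_000, 0.0
for _ in range(N):
    v = (random.gauss(ustar[0], 1.0), random.gauss(ustar[1], 1.0))
    if g(v) <= 0.0:
        # weight = phi(v) / phi(v - u*) for the shifted sampling density
        acc += math.exp(beta2 / 2.0 - (v[0] * ustar[0] + v[1] * ustar[1]))

pf = acc / N
print(pf)     # close to Phi(-sqrt(5)) ~ 1.27e-2 for this toy case
```

    Centering the sampling density at the design point makes roughly half the samples fail, so far fewer model evaluations are needed than with crude Monte Carlo at the same accuracy.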

  2. An attempt to use FMEA method for an approximate reliability assessment of machinery

    Directory of Open Access Journals (Sweden)

    Przystupa Krzysztof

    2017-01-01

    Full Text Available The paper presents a modified FMEA (Failure Mode and Effect Analysis) method to assess the reliability of the components that make up a type 2145 MAX Impactol TM Driver wrench from the Ingersoll Rand Company. The case concerns reliability analysis under conditions where full service data are not known. The aim of the study is to determine the weakest element in the design of the tool.

  3. Method for assessing software reliability of the document management system using the RFID technology

    Directory of Open Access Journals (Sweden)

    Kiedrowicz Maciej

    2016-01-01

    Full Text Available The deliberations presented in this study refer to a method for assessing the software reliability of a document management system using the RFID technology. A method for determining the reliability structure of the discussed software, understood as the index vector for assessing the reliability of its components, is proposed. The model of the analyzed software is the control transfer graph, in which the probability of activating individual components during the system's operation results from the so-called operational profile, which characterizes the actual working environment. The reliability structure is established as a result of the solution of a specific mathematical software task. Knowledge of the reliability structure of the software makes it possible to properly plan the time and financial expenses necessary to build software that meets the reliability requirements. The application of the presented method is illustrated by a numerical example corresponding to the software reality of the RFID document management system.
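
    A control-transfer-graph reliability model of this general kind can be sketched by simulation (Cheung-style: each component visit succeeds with that component's reliability, and transitions follow the operational profile). The component names, reliabilities, and transition probabilities below are invented, not taken from the paper:

```python
import random

random.seed(3)

# Assumed per-visit component reliabilities and operational profile.
rel = {"ui": 0.999, "scan": 0.995, "db": 0.998, "report": 0.997}
next_of = {
    "ui":     [("scan", 0.7), ("report", 0.3)],
    "scan":   [("db", 1.0)],
    "db":     [("report", 0.4), ("ui", 0.2), (None, 0.4)],  # None = normal exit
    "report": [(None, 1.0)],
}

def run_once():
    comp = "ui"
    while comp is not None:
        if random.random() > rel[comp]:
            return False                  # component failure aborts the run
        r, acc = random.random(), 0.0
        for nxt, prob in next_of[comp]:   # sample the next transition
            acc += prob
            if r < acc:
                break
        comp = nxt
    return True

N = 100_000
ok = sum(run_once() for _ in range(N))
print(ok / N)     # estimated system reliability under this profile
```

    The same quantity can be computed exactly by solving the linear system over the absorbing Markov chain; the simulation form makes the dependence on the operational profile explicit.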

  4. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

    Science.gov (United States)

    Clift, Simon S.; Tang, Wei-Pai

    1994-01-01

    We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.
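
    A quick way to see an RCM-style reordering in action is scipy's reverse Cuthill-McKee applied to a 5-point-stencil matrix whose rows have been scrambled; this illustrates the baseline RCM ordering only, not the paper's weighted-graph heuristic:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# 5-point stencil on a 10x10 grid (a toy PDE discretization).
nx = 10
n = nx * nx
A = lil_matrix((n, n))
for i in range(nx):
    for j in range(nx):
        k = i * nx + j
        A[k, k] = 4
        if i > 0: A[k, k - nx] = -1
        if i < nx - 1: A[k, k + nx] = -1
        if j > 0: A[k, k - 1] = -1
        if j < nx - 1: A[k, k + 1] = -1
A = csr_matrix(A)

rng = np.random.default_rng(1)
p = rng.permutation(n)
Ascr = A[p][:, p]                      # scrambled ordering: large bandwidth

perm = reverse_cuthill_mckee(Ascr, symmetric_mode=True)
Arcm = Ascr[perm][:, perm]             # RCM recovers a narrow band

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.abs(r - c).max())

print(bandwidth(Ascr), bandwidth(Arcm))
```

    A smaller bandwidth tends to reduce fill outside the incomplete-factorization pattern, which is the property the ordering heuristics above exploit.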

  5. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  6. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability... comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively. This tendency is true for any selected subset.

  7. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  8. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    Science.gov (United States)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers the question of populating the relevant SIEM nodes based on calculations of objective assessments, in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended for establishing real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events are realized and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.

  9. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures, where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our and the compared method, respectively.

  10. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    Science.gov (United States)

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario, connecting a five-sensor LiDAR network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
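
    A minimal tabular loop for the "how many sensors" decision can be sketched as follows; the detection model, costs, and single-state setting are invented stand-ins for the paper's Webots/IoT framework, so this shows only the shape of the Q-learning idea:

```python
import random

random.seed(42)

# Invented stand-in environment: reward trades detection success against the
# cost of powering n LiDAR sensors (NOT the paper's simulated scenario).
def step(n_sensors):
    p_detect = 1.0 - 0.5 ** n_sensors        # assumed detection model
    hit = random.random() < p_detect
    return (1.0 if hit else -1.0) - 0.15 * n_sensors

actions = [1, 2, 3, 4, 5]                    # candidate sensor counts
Q = {a: 0.0 for a in actions}
visits = {a: 0 for a in actions}
eps = 0.1                                    # epsilon-greedy exploration

for t in range(30_000):
    a = random.choice(actions) if random.random() < eps else max(Q, key=Q.get)
    r = step(a)
    visits[a] += 1
    Q[a] += (r - Q[a]) / visits[a]           # running-mean value update

print(Q, max(Q, key=Q.get))
```

    In this single-state setting the update reduces to a running average of reward per action; a full Q-learning update would add a discounted maximum over next-state values.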

  11. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Ertekin Öztekin Öztekin

    2015-12-01

    Full Text Available The distances of bolts to each other and the distances of bolts to the edges of connection plates are designed on the basis of minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, all reliability index values for all those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it was emphasized that using the same bolt distances in both traditional designs and designs with higher reliability levels would be incorrect.
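
    The flavor of such an MCS reliability computation can be shown with a toy bolt bearing check; the resistance model and all distributions below are invented for illustration, not taken from any design code:

```python
import random, statistics

random.seed(7)
nd = statistics.NormalDist()

# Hypothetical bolt bearing check: resistance R vs load effect S.
N = 200_000
fails = 0
for _ in range(N):
    e = random.gauss(30.0, 2.0)          # edge distance (mm)
    fu = random.gauss(430.0, 25.0)       # plate ultimate strength (MPa)
    t = 8.0                              # plate thickness (mm)
    R = 0.5 * e * t * fu / 1000.0        # toy bearing resistance (kN)
    S = random.gauss(35.0, 5.0)          # applied bolt force (kN)
    fails += R < S

pf = fails / N
beta = -nd.inv_cdf(pf)                   # reliability index from the MCS pf
print(pf, beta)
```

    Repeating this computation over a grid of edge distances, bolt types, and plate thicknesses yields the tables of reliability indices that studies of this kind report.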

  12. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, oftentimes the laboratory condition of ADT differs from the field condition; thus, to predict field failure, one needs to calibrate the prediction made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to calibrate the difference between the lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedure are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated by two examples, together with a sensitivity analysis with respect to the prior distribution assumption.
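
    The lab-to-field fusion idea can be reduced to a conjugate Beta-Binomial sketch: encode the lab (ADT) prediction as a prior, then let field data pull the estimate toward the observed field rate. The paper uses MCMC over degradation models with calibration factors; the numbers below are entirely invented and convey only the flavor:

```python
# Lab ADT suggests p_fail ~ 0.02; encode it as a Beta prior worth ~50
# "pseudo-tests" (prior strength is an assumption of this sketch).
a0, b0 = 0.02 * 50, 0.98 * 50            # Beta prior from the lab prediction

field_trials, field_failures = 200, 9    # hypothetical field observations
a = a0 + field_failures                  # conjugate Beta-Binomial update
b = b0 + field_trials - field_failures

posterior_mean = a / (a + b)
print(posterior_mean)   # pulled from the lab's 0.02 toward the field's 0.045
```

    Weighting the prior strength (the 50 pseudo-tests here) plays the role of a calibration factor: a weaker prior lets discrepant field data dominate sooner.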

  13. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  14. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and realizing qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
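
    Once a fault tree is in hand (however it was derived), its top-event probability follows from the minimal cut sets. The toy tree below is invented, not the paper's safety injection system; it shows the inclusion-exclusion evaluation next to the usual rare-event approximation:

```python
from itertools import combinations

# Assumed basic-event failure probabilities (independent events).
p = {"pump": 1e-3, "valve": 5e-4, "sensor": 2e-3, "power": 1e-4}
# Top event occurs if every basic event in some minimal cut set fails.
cut_sets = [{"pump", "power"}, {"valve"}, {"sensor", "power"}]

def cut_prob(events):
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

# Exact top-event probability by inclusion-exclusion over cut-set unions.
top = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        union = set().union(*combo)
        top += (-1) ** (k + 1) * cut_prob(union)

approx = sum(cut_prob(cs) for cs in cut_sets)   # rare-event upper bound
print(top, approx)
```

    For small failure probabilities the rare-event sum is a tight upper bound on the exact value, which is why it is the standard hand calculation in qualitative-to-quantitative fault tree work.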

  15. Third order TRANSPORT with MAD [Methodical Accelerator Design] input

    International Nuclear Information System (INIS)

    Carey, D.C.

    1988-01-01

    This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix

  16. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques and then examines the model-order-reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique for generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines Bond Graphs for model representation with Genetic Programming for exploring different ideas on the design space; the tree-structured transfer function results from replacing each typical Bond Graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function are analyzed with the conventional and Bond Graph methods, and an approach towards model order reduction is then examined. The suggested algorithm and other known modern model-order-reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model-order-reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and the Bond Graph methods are compared and

  17. Order Reduction in High-Order Runge-Kutta Methods for Initial Boundary Value Problems

    OpenAIRE

    Rosales, Rodolfo Ruben; Seibold, Benjamin; Shirokoff, David; Zhou, Dong

    2017-01-01

    This paper studies the order reduction phenomenon for initial-boundary-value problems that occurs with many Runge-Kutta time-stepping schemes. First, a geometric explanation of the mechanics of the phenomenon is provided: the approximation error develops boundary layers, induced by a mismatch between the approximation error in the interior and at the boundaries. Second, an analysis of the modes of the numerical scheme is conducted, which explains under which circumstances boundary layers pers...

  18. Development of a reliability-analysis method for category I structures

    International Nuclear Information System (INIS)

    Shinozuka, M.; Kako, T.; Hwang, H.; Reich, M.

    1983-01-01

    The present paper develops a reliability analysis method for category I nuclear structures, particularly for reinforced concrete containment structures subjected to various load combinations. The loads considered here include dead loads, accidental internal pressure and earthquake ground acceleration. For mathematical tractability, earthquake occurrences are assumed to be governed by the Poisson arrival law, while the acceleration history is idealized as a Gaussian vector process of finite duration. The vector process consists of three component processes, each with zero mean. The second order statistics of this process are specified by a three-by-three spectral density matrix with a multiplying factor representing the overall intensity of the ground acceleration. With respect to accidental internal pressure, the following assumptions are made: (a) it occurs in accordance with the Poisson law; (b) its intensity and duration are random; and (c) its temporal rise and fall behaviors are such that a quasi-static structural analysis applies. The dead load is considered to be a deterministic constant

  19. Structural system reliability calculation using a probabilistic fault tree analysis method

    Science.gov (United States)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computer-intensive computer calculations. A computer program has been developed to implement the PFTA.
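The adaptive importance sampling step of the PFTA procedure can be illustrated in a drastically simplified, non-adaptive, one-dimensional form. The limit state function, the sampling density centred at the most probable point (MPP), and the sample size below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def phi(x, mu=0.0):
    """Standard-width normal density centred at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def failure_prob_is(g, mpp, n=20000, seed=1):
    """Estimate P(g(X) < 0) for X ~ N(0,1) by importance sampling,
    drawing from a normal density shifted to the MPP and correcting
    each failing sample with the likelihood ratio weight."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(mpp, 1.0)          # sample from the shifted density
        if g(x) < 0:
            acc += phi(x) / phi(x, mpp)  # likelihood ratio weight
    return acc / n

# Limit state g(x) = 3 - x fails when x > 3; exact pf = Phi(-3) ~ 1.35e-3
pf = failure_prob_is(lambda x: 3.0 - x, mpp=3.0)
```

Sampling around the MPP concentrates the draws in the failure region, so far fewer samples are needed than with crude Monte Carlo for the same accuracy.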

  20. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized validity that are quick and easy to use. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in a static bipedal standing position using optical podography and digital photography. Three trials were performed for each participant. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated both manually and by a computerized method using Photoshop CS5 software. A test-retest design was used to determine reliability, and validity was assessed with the intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software have been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their monitoring.

  1. A human reliability based usability evaluation method for safety-critical software

    International Nuclear Information System (INIS)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.; Ragsdale, A.

    2006-01-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation with the SPAR-H human reliability analysis method. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at a usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. The method makes it possible to identify and prioritize usability issues seamlessly (i.e., a higher UEP calls for more immediate fixes). However, the original version of the method required the usability evaluator to assign priority weights to the final UEP, allowing the priority of a usability issue to differ among evaluators. The purpose of this paper is to explore an alternative approach that standardizes the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  2. An automated method for estimating reliability of grid systems using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Emmanuel Ramirez-Marquez, Jose

    2012-01-01

    Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability analysis is generally computation-intensive due to the complexity of the system. Moreover, conventional reliability models rest on common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge about the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach uses a data-mining algorithm, K2, to discover the grid system structure from raw historical system data and to find the minimum resource spanning trees (MRST) within the grid, and then uses Bayesian networks (BN) to model the MRST and estimate grid service reliability.

  3. Inter- and intra- observer reliability of risk assessment of repetitive work without an explicit method.

    Science.gov (United States)

    Eliasson, Kristina; Palm, Peter; Nyman, Teresia; Forsman, Mikael

    2017-07-01

    A common way to conduct practical risk assessments is to observe a job and report the observed long-term risks for musculoskeletal disorders. The aim of this study was to evaluate the inter- and intra-observer reliability of ergonomists' risk assessments made without the support of an explicit risk assessment method. Twenty-one experienced ergonomists assessed the risk level (low, moderate, high risk) of eight upper body regions, as well as the global risk, of 10 video-recorded work tasks. Intra-observer reliability was assessed by having nine of the ergonomists repeat the procedure at least three weeks after the first assessment. The ergonomists based their risk assessments on their own experience and knowledge. The statistical parameters of reliability included percentage agreement, kappa, linearly weighted kappa, intraclass correlation and Kendall's coefficient of concordance. The average inter-observer agreement on the global risk was 53%, and the corresponding weighted kappa (Kw) was 0.32, indicating fair reliability. The intra-observer agreement was 61% and 0.41 (Kw). This study indicates that risk assessments of the upper body made without an explicit observational method do not have acceptable reliability. It is therefore recommended to make greater use of systematic risk assessment methods. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
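The linearly weighted kappa reported in this study can be computed directly from paired ordinal ratings. The ratings below are invented for illustration; the weighting scheme (1 minus the normalized category distance) is the standard linear one:

```python
def weighted_kappa(r1, r2, categories):
    """Linearly weighted kappa for two raters over ordered categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Linear weights: full credit on the diagonal, partial credit nearby
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal proportions for each rater
    p1 = [sum(obs[i]) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

ratings_a = ["low", "low", "moderate", "high", "moderate", "low", "high", "moderate"]
ratings_b = ["low", "moderate", "moderate", "high", "high", "low", "moderate", "moderate"]
kw = weighted_kappa(ratings_a, ratings_b, ["low", "moderate", "high"])
```

Values near 0.32, as in the study, fall in the conventional "fair" band of the kappa interpretation scale.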

  4. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    Full Text Available At present cloud computing is one of the newest trends of distributed computation, and it is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development, and the optimization of the reliability and performance of cloud services composition applications, a typical stochastic optimization problem, faces severe challenges due to randomness and long transactions, as well as characteristics of the cloud computing resources such as openness and dynamism. Traditional reliability and performance optimization techniques, for example Markov models and state space analysis, have defects such as being too time-consuming, being prone to state space explosion, and failing to satisfy the assumption of component execution independence. To overcome these defects, we propose a fast optimization method for the reliability and performance of cloud services composition applications based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud service composition applications based on multi-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Building on this, a fast reliability and performance optimization algorithm is presented. Finally, illustrative examples are given.

  5. Reliability-Based Topology Optimization Using Stochastic Response Surface Method with Sparse Grid Design

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A mathematical framework is developed which integrates the reliability concept into topology optimization to solve reliability-based topology optimization (RBTO) problems under uncertainty. Two typical methodologies have been presented and implemented: the performance measure approach (PMA) and the sequential optimization and reliability assessment (SORA). To enhance the computational efficiency of reliability analysis, the stochastic response surface method (SRSM) is applied to approximate the true limit state function with respect to the normalized random variables, combined with a reasonable design of experiments generated by sparse grid design (SGD), which has proven to be an effective and economical discretization technique. Uncertainties such as material properties and external loads are considered in three numerical examples: a cantilever beam, a loaded knee structure, and a heat conduction problem. Monte-Carlo simulations are also performed to verify the accuracy of the failure probabilities computed by the proposed approach. Based on the results, it is demonstrated that applying SRSM with SGD produces an efficient reliability analysis in RBTO and enables a more reliable design than that obtained by deterministic topology optimization (DTO). It is also found that, at identical accuracy, SORA is superior to PMA in terms of computational efficiency.

  6. A Method to Increase Drivers' Trust in Collision Warning Systems Based on Reliability Information of Sensor

    Science.gov (United States)

    Tsutsumi, Shigeyoshi; Wada, Takahiro; Akita, Tokihiko; Doi, Shun'ichi

    A driver's workload tends to increase when driving in complicated traffic environments, such as during a lane change. In such cases, rear collision warning is effective in reducing cognitive workload. On the other hand, false alarms or missed alarms caused by sensor errors decrease the driver's trust in the warning system, which can result in low efficiency of the system. Suppose that reliability information for the sensor is provided in real time. In this paper, we propose a new warning method that increases the driver's trust in the system, even when sensor reliability is low, by utilizing the sensor reliability information. The effectiveness of the warning method is shown by driving simulator experiments.

  7. Identification of fractional order systems using modulating functions method

    KAUST Repository

    Liu, Dayan; Laleg-Kirati, Taous-Meriem; Gibaru, O.; Perruquetti, Wilfrid

    2013-01-01

    can be transferred into the ones of the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear

  8. An accurate scheme by block method for third order ordinary ...

    African Journals Online (AJOL)

    problems of ordinary differential equations is presented in this paper. The approach of collocation approximation is adopted in the derivation of the scheme and then the scheme is applied as simultaneous integrator to special third order initial value problem of ordinary differential equations. This implementation strategy is ...

  9. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    Science.gov (United States)

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives for the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques had yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo imaging. The percentage of viable tissue was evaluated with the three methods, simultaneously and independently, by two raters. Interrater reliability and validity were analyzed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to check for systematic bias in the evaluations. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the methods, and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Regardless of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.

  10. Proceeding of 35th domestic symposium on applications of structural reliability and risk assessment methods to nuclear power plants

    International Nuclear Information System (INIS)

    2005-06-01

    As the 35th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, a symposium titled 'Applications of structural reliability/risk assessment methods to nuclear energy' was held. Six speakers gave lectures titled 'Structural reliability and risk assessment methods', 'Risk-informed regulation of US nuclear energy and role of probabilistic risk assessment', 'Reliability and risk assessment methods in chemical plants', 'Practical structural design methods based on reliability in architectural and civil areas', 'Maintenance activities based on reliability in thermal power plants' and 'LWR maintenance strategies based on Probabilistic Fracture Mechanics'. (T. Tanaka)

  11. A new method for ordering triangular fuzzy numbers

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2010-09-01

    Full Text Available Ranking fuzzy numbers plays a very important role in linguistic decision making and other fuzzy application systems. In spite of the many ranking methods, none can rank fuzzy numbers consistently with human intuition in all cases. Shortcomings are found in some of the convenient methods for ranking triangular fuzzy numbers, such as the coefficient of variation (CV) index, distance between fuzzy sets, centroid point and original point, and weighted mean value. In this paper, we introduce a new method for ranking triangular fuzzy numbers that overcomes the shortcomings of the previous techniques. Finally, we compare our method with some convenient ranking methods to illustrate its advantages.
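For context, the classical centroid ranking that papers like this one set out to improve can be sketched as follows (the triangular fuzzy numbers below are illustrative, and this is the baseline technique, not the paper's new method):

```python
def centroid(tfn):
    """x-coordinate of the centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank(fuzzy_numbers):
    """Order triangular fuzzy numbers by descending centroid."""
    return sorted(fuzzy_numbers, key=centroid, reverse=True)

# Illustrative triangular fuzzy numbers (left foot, mode, right foot)
A = (1, 2, 3)      # centroid 2.0
B = (2, 3, 4)      # centroid 3.0
C = (2, 2.5, 4)    # centroid ~2.83
order = rank([A, B, C])
```

A known weakness of this baseline, which motivates newer ranking methods, is that distinct fuzzy numbers with equal centroids are judged indifferent even when intuition would separate them by spread or skew.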

  12. Extended block diagram method for a multi-state system reliability assessment

    International Nuclear Information System (INIS)

    Lisnianski, Anatoly

    2007-01-01

    The presented method extends the classical reliability block diagram method to a repairable multi-state system. It is very suitable for engineering applications since the procedure is well formalized and based on the natural decomposition of the entire multi-state system (the system is represented as a collection of its elements). Until now, the classical block diagram method did not provide a reliability assessment for repairable multi-state systems, and the straightforward stochastic process methods are very difficult to apply in engineering in such cases due to the 'dimension damnation': the huge number of system states. The suggested method is based on combined random processes and the universal generating function technique, and it drastically reduces the number of states in the multi-state model
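The universal generating function (UGF) composition at the core of this method can be sketched as a polynomial-style convolution of state distributions. The performance levels, probabilities, and demand below are made-up illustrative data, not from the paper:

```python
from collections import defaultdict

def combine(u1, u2, op):
    """Compose two UGFs {performance_level: probability} with a
    structure operator (min for series, sum for parallel capacity)."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two multi-state elements: capacity levels with their probabilities
u_a = {0: 0.1, 50: 0.3, 100: 0.6}
u_b = {0: 0.05, 100: 0.95}

series = combine(u_a, u_b, min)                    # bottleneck capacity
parallel = combine(u_a, u_b, lambda x, y: x + y)   # capacities add

# Availability = probability that system performance meets demand w = 100
avail = sum(p for g, p in series.items() if g >= 100)
```

Merging equal performance levels at each composition step is what keeps the state count small compared with a raw stochastic-process model of the same system.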

  13. [Knowledge of university students in Szeged, Hungary about reliable contraceptive methods and sexually transmitted diseases].

    Science.gov (United States)

    Devosa, Iván; Kozinszky, Zoltán; Vanya, Melinda; Szili, Károly; Fáyné Dombi, Alice; Barabás, Katalin

    2016-04-03

    Promiscuity and lack of use of reliable contraceptive methods increase the probability of sexually transmitted diseases and the risk of unwanted pregnancies, which are quite common among university students. The aim of the study was to assess the knowledge of university students about reliable contraceptive methods and sexually transmitted diseases, and to assess the effectiveness of sexual health education in secondary schools, with specific focus on education held by peers. An anonymous, self-administered questionnaire survey was carried out in a randomized sample of students at the University of Szeged (n = 472; 298 women and 174 men, average age 21 years) between 2009 and 2011. 62.1% of the respondents declared that reproductive health education lessons in high schools held by peers were a reliable and authentic source of information, 12.3% considered them a less reliable source, and 25.6% regarded school health education as an irrelevant source. Among those who considered the health education held by peers a reliable source, there were significantly more females (69.3% vs. 46.6%, p = 0.001), significantly fewer lived in cities (83.6% vs. 94.8%, p = 0.025), and significantly more respondents knew that Candida infection can be transmitted through sexual intercourse (79.5% vs. 63.9%, p = 0.02), as compared to those who did not consider health education held by peers a reliable source. The majority of respondents obtained knowledge about sexual issues from the mass media. Young people who considered health education programs reliable were significantly better informed about Candida disease.

  14. Noninvasive Hemoglobin Monitoring: A Rapid, Reliable, and Cost-Effective Method Following Total Joint Replacement.

    Science.gov (United States)

    Martin, J Ryan; Camp, Christopher L; Stitz, Amber; Young, Ernest Y; Abdel, Matthew P; Taunton, Michael J; Trousdale, Robert T

    2016-03-02

    Noninvasive hemoglobin (nHgb) monitoring was initially introduced in the intensive care setting as a means of rapidly assessing Hgb values without performing a blood draw. We conducted a prospective analysis to compare reliability, cost, and patient preference between nHgb monitoring and invasive Hgb (iHgb) monitoring performed via a traditional blood draw. We enrolled 100 consecutive patients undergoing primary or revision total hip or total knee arthroplasty. On postoperative day 1, nHgb and iHgb values were obtained within thirty minutes of one another. iHgb and nHgb values, cost, patient satisfaction, and the duration of time required to obtain each reading were recorded. The concordance correlation coefficient (CCC) was utilized to evaluate the agreement of the two Hgb measurement methods. Paired t tests and Wilcoxon signed-rank tests were utilized to compare mean Hgb values, time, and pain for all readings. The mean Hgb values did not differ significantly between the two measurement methods: the mean iHgb value (and standard deviation) was 11.3 ± 1.4 g/dL (range, 8.2 to 14.3 g/dL), and the mean nHgb value was 11.5 ± 1.8 g/dL (range, 7.0 to 16.0 g/dL) (p = 0.11). The CCC between the two Hgb methods was 0.69. One hundred percent of the patients with an nHgb value of ≥ 10.5 g/dL had an iHgb value of >8.0 g/dL. The mean time to obtain an Hgb value was 0.9 minute for the nHgb method and 51.1 minutes for the iHgb method (p [...] measurement, resulting in a savings of $26 per Hgb assessment when the noninvasive method is used. Noninvasive Hgb monitoring was found to be more efficient, less expensive, and preferred by patients compared with iHgb monitoring. Providers could consider screening total joint arthroplasty patients with nHgb monitoring and only order iHgb measurement if the nHgb value is [...]. Had this protocol been applied to the first blood draw in our 100 patients, approximately $2000 would have been saved. Extrapolated to the U.S. total joint arthroplasty practice [...]

  15. A high order regularisation method for solving the Poisson equation and selected applications using vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm

    ring dynamics is presented based on the alignment of the vorticity vector with the principal axis of the strain rate tensor. A novel iterative implementation of the Brinkman penalisation method is introduced for the enforcement of a fluid-solid interface in re-meshed vortex methods. The iterative scheme [...] is included to explicitly fulfil the kinematic constraints of the flow field. The high order, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data, a novel analysis on the vortex...

  16. Photoswitchable method for the ordered attachment of proteins to surfaces

    Science.gov (United States)

    Camarero, Julio A.; De Yoreo, James J.; Kwon, Youngeun

    2010-04-20

    Described herein is a method for the attachment of proteins to any solid support with control over the orientation of the attachment. The method is extremely efficient, not requiring the previous purification of the protein to be attached, and can be activated by UV-light. Spatially addressable arrays of multiple protein components can be generated by using standard photolithographic techniques.

  17. Photoswitchable method for the ordered attachment of proteins to surfaces

    Science.gov (United States)

    Camarero, Julio A [Livermore, CA; DeYoreo, James J [Clayton, CA; Kwon, Youngeun [Livermore, CA

    2011-07-05

    Described herein is a method for the attachment of proteins to any solid support with control over the orientation of the attachment. The method is extremely efficient, not requiring the previous purification of the protein to be attached, and can be activated by UV-light. Spatially addressable arrays of multiple protein components can be generated by using standard photolithographic techniques.

  18. A new method for improving the reliability of fracture toughness surveillance of nuclear pressure vessel by neutron irradiated embrittlement

    International Nuclear Information System (INIS)

    Zhang Xinping; Shi Yaowu

    1992-01-01

    In order to obtain more information from neutron-irradiated sample specimens and to raise the reliability of the fracture toughness surveillance test, it is of particular significance to re-use the broken Charpy-size specimens that have already been tested in the surveillance programme. In this work, through a renewed design and utilization of Charpy-size specimens, 9 fracture toughness data points can be obtained from one pre-cracked, side-grooved Charpy-size specimen, whereas at present usually only 1 to 3 fracture toughness data points can be obtained from one Charpy-size specimen. It is thus found that the new method can markedly improve the reliability of fracture toughness surveillance testing and evaluation. Some factors which affect the reasonable design of pre-cracked, deeply side-grooved Charpy-size compound specimens are also discussed

  19. A study of digital hardware architectures for nuclear reactors protection systems applications - reliability and safety analysis methods

    International Nuclear Information System (INIS)

    Benko, Pedro Luiz

    1997-01-01

    A study of digital hardware architectures for protection systems of nuclear reactors is presented, including operating experience in many countries and topologies and solutions for interface circuits. A method for developing digital system architectures based on fault tolerance and safety requirements is proposed, and directives for assessing such conditions are suggested. Techniques and the most common tools employed in reliability and safety evaluation and in the modeling of hardware architectures are also presented. Markov chain modeling is used to evaluate the reliability of redundant architectures. In order to estimate software quality, several mechanisms to be used in design, specification, and validation and verification (V and V) procedures are suggested. A digital protection system architecture has been analyzed as a case study. (author)
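The Markov chain evaluation of a redundant architecture mentioned in this record can be illustrated with a textbook birth-death model of a 1-out-of-2 system. The failure and repair rates below are hypothetical, and the single-repair-facility assumption is mine, not from the thesis:

```python
def availability_1oo2(lam, mu):
    """Steady-state availability of a 1-out-of-2 redundant system modeled
    as a birth-death Markov chain over the number of failed units
    (0, 1, 2), with one repair facility.

    Balance equations give the unnormalised steady-state probabilities
    by chaining the rate ratios along the birth-death states."""
    p0 = 1.0                      # both units up
    p1 = p0 * 2 * lam / mu        # either of the two units fails
    p2 = p1 * lam / mu            # the remaining unit fails
    total = p0 + p1 + p2
    # The system is up while at least one unit works (states 0 and 1)
    return (p0 + p1) / total

# Hypothetical rates: failure 1e-3 per hour, repair 1e-1 per hour
A = availability_1oo2(lam=1e-3, mu=1e-1)
```

The redundant pair is noticeably more available than a single unit with the same rates, whose availability would be mu / (lam + mu), about 0.990 here versus roughly 0.9998 for the pair.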

  20. Scale Sensitivity and Question Order in the Contingent Valuation Method

    OpenAIRE

    Andersson, Henrik; Svensson, Mikael

    2010-01-01

    This study examines the effect on respondents' willingness to pay to reduce mortality risk by the order of the questions in a stated preference study. Using answers from an experiment conducted on a Swedish sample where respondents' cognitive ability was measured and where they participated in a contingent valuation survey, it was found that scale sensitivity is strongest when respondents are asked about a smaller risk reduction first ('bottom-up' approach). This contradicts some previous evi...

  1. Transition from Partial Factors Method to Simulation-Based Reliability Assessment in Structural Design

    Czech Academy of Sciences Publication Activity Database

    Marek, Pavel; Guštar, M.; Permaul, K.

    1999-01-01

    Roč. 14, č. 1 (1999), s. 105-118 ISSN 0266-8920 R&D Projects: GA ČR GA103/94/0562; GA ČR GV103/96/K034 Keywords : reliability * safety * failure * durability * Monte Carlo method Subject RIV: JM - Building Engineering Impact factor: 0.522, year: 1999

  2. Evaluation of the reliability of Levine method of wound swab for ...

    African Journals Online (AJOL)

    The aim of this paper is to evaluate the reliability of Levine swab in accurate identification of microorganisms present in a wound and identify the necessity for further studies in this regard. Methods: A semi structured questionnaire was administered and physical examination was performed on patients with chronic wounds ...

  3. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, and the failure probability can then be obtained easily by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation; its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; the UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
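The Unscented Transformation used here can be sketched for ordinary (integer) moments of a performance function with correlated Gaussian inputs; fractional moments would apply the same sigma points to |y|**alpha. All numbers below are illustrative, and the simple kappa-weighted sigma-point set is one common UT variant, not necessarily the one in the paper:

```python
import math

def cholesky(a):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def ut_moments(mean, cov, func, kappa=0.0):
    """Propagate correlated Gaussian inputs through func with 2n+1 sigma
    points and return the first two moments of the output."""
    n = len(mean)
    L = cholesky([[(n + kappa) * cov[i][j] for j in range(n)] for i in range(n)])
    pts = [list(mean)]
    for k in range(n):
        col = [L[i][k] for i in range(n)]
        pts.append([m + c for m, c in zip(mean, col)])
        pts.append([m - c for m, c in zip(mean, col)])
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n)
    ys = [func(p) for p in pts]
    m1 = sum(w * y for w, y in zip(weights, ys))
    m2 = sum(w * (y - m1) ** 2 for w, y in zip(weights, ys))
    return m1, m2

# Linear performance function with correlated inputs (illustrative data);
# the UT is exact for linear maps: mean = 5, variance = 1 + 8 + 2 = 11
g = lambda x: x[0] + 2.0 * x[1]
m1, m2 = ut_moments([1.0, 2.0], [[1.0, 0.5], [0.5, 2.0]], g)
```

Only five function evaluations are needed here, which is the efficiency argument the abstract makes for UT-based moment estimation.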

  4. Development of advanced methods and related software for human reliability evaluation within probabilistic safety analyses

    International Nuclear Information System (INIS)

    Kosmowski, K.T.; Mertens, J.; Degen, G.; Reer, B.

    1994-06-01

    Human Reliability Analysis (HRA) is an important part of Probabilistic Safety Analysis (PSA). The first part of this report gives an overview of types of human behaviour and human error, including the effect of significant performance shaping factors on human reliability. Particularly with regard to safety assessments for nuclear power plants, many HRA methods have been developed. The most important of these methods are presented and discussed in the report, together with techniques for incorporating HRA into PSA and with models of operator cognitive behaviour. Based on existing HRA methods, the concept of a software system is described. For the development of this system the utilization of modern programming tools is proposed; the essential goal is the effective application of HRA methods. A possible integration of computer-aided HRA within PSA is discussed. The features of expert system technology and examples of applications (PSA, HRA) are presented in four appendices. (orig.) [de]

  5. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
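A state-sampling Monte Carlo sketch in the spirit of this record is shown below. The component availabilities and the series-parallel feeder topology are invented for illustration, and the paper's FMEA-driven load-shedding logic is not reproduced:

```python
import random

def mc_system_availability(avail, structure, n=50000, seed=7):
    """State-sampling Monte Carlo: draw each component up/down from its
    availability and evaluate the system structure function."""
    rng = random.Random(seed)
    up = 0
    for _ in range(n):
        state = {c: rng.random() < a for c, a in avail.items()}
        up += structure(state)
    return up / n

# Hypothetical feeder section in series with two parallel transformers
avail = {"feeder": 0.99, "t1": 0.95, "t2": 0.95}
system = lambda s: s["feeder"] and (s["t1"] or s["t2"])

est = mc_system_availability(avail, system)
exact = 0.99 * (1 - 0.05 * 0.05)   # series-parallel closed form, ~0.9875
```

For a system this small the closed form is available to check against; the simulation approach earns its keep once repair processes, switching, and load shedding make an analytic expression intractable.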

  6. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    EI-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table modeling method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied.
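    The core of the truth-table approach is to enumerate every up/down combination of the components and sum the probabilities of the combinations in which the system fails. A minimal sketch follows; the three-component power-supply topology (a transformer in series with two redundant regulators) and all failure probabilities are hypothetical, not taken from the paper.

    ```python
    from itertools import product

    def system_unreliability(component_q, structure):
        """Enumerate every up/down state combination, evaluate the system
        structure function, and sum the probabilities of the failed states.
        `component_q` maps component names to failure probabilities;
        `structure` returns True when the system works in a given state."""
        names = list(component_q)
        q_fail = 0.0
        for states in product([True, False], repeat=len(names)):
            state = dict(zip(names, states))
            p = 1.0
            for n in names:
                p *= (1 - component_q[n]) if state[n] else component_q[n]
            if not structure(state):
                q_fail += p
        return q_fail

    # Hypothetical power supply: transformer in series with two redundant regulators.
    q = system_unreliability(
        {"transformer": 0.01, "reg_a": 0.05, "reg_b": 0.05},
        lambda s: s["transformer"] and (s["reg_a"] or s["reg_b"]),
    )
    ```

    For this toy circuit the enumeration reproduces the closed-form series/parallel result, 1 − 0.99 × (1 − 0.05²).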

  7. Methods for estimating the reliability of the RBMK fuel assemblies and elements

    International Nuclear Information System (INIS)

    Klemin, A.I.; Sitkarev, A.G.

    1985-01-01

    Applied non-parametric methods for calculation of point and interval estimations for the basic nomenclature of reliability factors for the RBMK fuel assemblies and elements are described. As the fuel assembly and element reliability factors, the average lifetime is considered at a preset operating time up to unloading due to fuel burnout as well as the average lifetime at the reactor transient operation and at the steady-state fuel reloading mode of reactor operation. The formulae obtained are included into the special standardized engineering documentation

  8. Strong Stability Preserving Explicit Runge--Kutta Methods of Maximal Effective Order

    KAUST Repository

    Hadjimichael, Yiannis

    2013-07-23

    We apply the concept of effective order to strong stability preserving (SSP) explicit Runge--Kutta methods. Relative to classical Runge--Kutta methods, methods with an effective order of accuracy are designed to satisfy a relaxed set of order conditions but yield higher order accuracy when composed with special starting and stopping methods. We show that this allows the construction of four-stage SSP methods with effective order four (such methods cannot have classical order four). However, we also prove that effective order five methods---like classical order five methods---require the use of nonpositive weights and so cannot be SSP. By numerical optimization, we construct explicit SSP Runge--Kutta methods up to effective order four and establish the optimality of many of them. Numerical experiments demonstrate the validity of these methods in practice.
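    For context, the classic three-stage, third-order SSP Runge–Kutta method (in Shu–Osher form) can be written as a convex combination of forward Euler steps, which is what gives it the SSP property. The step below is that well-known classical scheme, not one of the effective-order-four methods constructed in the paper.

    ```python
    def ssprk3_step(f, u, t, h):
        """One step of the classic three-stage, third-order SSP Runge--Kutta
        method (Shu--Osher form): each stage is a convex combination of the
        previous solution and a forward Euler step."""
        u1 = u + h * f(t, u)
        u2 = 0.75 * u + 0.25 * (u1 + h * f(t + h, u1))
        return u / 3 + 2.0 / 3.0 * (u2 + h * f(t + 0.5 * h, u2))
    ```

    Halving the step size on a smooth problem should reduce the global error by roughly a factor of 2³ = 8, consistent with classical order three.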

  9. Strong Stability Preserving Explicit Runge--Kutta Methods of Maximal Effective Order

    KAUST Repository

    Hadjimichael, Yiannis; Macdonald, Colin B.; Ketcheson, David I.; Verner, James H.

    2013-01-01

    We apply the concept of effective order to strong stability preserving (SSP) explicit Runge--Kutta methods. Relative to classical Runge--Kutta methods, methods with an effective order of accuracy are designed to satisfy a relaxed set of order conditions but yield higher order accuracy when composed with special starting and stopping methods. We show that this allows the construction of four-stage SSP methods with effective order four (such methods cannot have classical order four). However, we also prove that effective order five methods---like classical order five methods---require the use of nonpositive weights and so cannot be SSP. By numerical optimization, we construct explicit SSP Runge--Kutta methods up to effective order four and establish the optimality of many of them. Numerical experiments demonstrate the validity of these methods in practice.

  10. Screening, sensitivity, and uncertainty for the CREAM method of Human Reliability Analysis

    International Nuclear Information System (INIS)

    Bedford, Tim; Bayley, Clare; Revie, Matthew

    2013-01-01

    This paper reports a sensitivity analysis of the Cognitive Reliability and Error Analysis Method (CREAM) for Human Reliability Analysis. We consider three different aspects: the difference between the outputs of the Basic and Extended methods on the same HRA scenario; the variability in outputs through the choices made for common performance conditions (CPCs); and the variability in outputs through the assignment of choices for cognitive function failures (CFFs). We discuss the problem of interpreting categories when applying the method, compare its quantitative structure to that of first-generation methods, and discuss how dependence is modelled within the approach. We show that the control mode intervals used in the Basic method are too narrow to be consistent with the Extended method. This motivates a new screening method that gives improved accuracy with respect to the Basic method, in the sense that it (on average) halves the uncertainty associated with the Basic method. We make some observations on the design of a screening method that are generally applicable in Risk Analysis. Finally, we propose a new method of combining CPC weights with nominal probabilities so that the calculated probabilities are always in range (i.e. between 0 and 1), while satisfying sensible properties that are consistent with the overall CREAM method.
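    The in-range requirement mentioned at the end of the abstract can be illustrated with a generic device: applying multiplicative weights on the log-odds scale rather than directly to the probability keeps the adjusted value strictly between 0 and 1. This is only a sketch of the general idea; the specific combination rule derived in the paper is not reproduced here.

    ```python
    def adjust_probability(p_nominal, cpc_weights):
        """Generic sketch: apply performance-condition weights on the
        log-odds scale, so the adjusted probability always stays in (0, 1)
        regardless of how large the weights are. Not the paper's rule."""
        odds = p_nominal / (1 - p_nominal)
        for w in cpc_weights:
            odds *= w
        return odds / (1 + odds)
    ```

    Direct multiplication (0.01 × 10 × 10 = 1.0) would saturate at the boundary, whereas the log-odds form keeps the result strictly inside the unit interval.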

  11. Higher order temporal finite element methods through mixed formalisms.

    Science.gov (United States)

    Kim, Jinkyu

    2014-01-01

    The extended framework of Hamilton's principle and the mixed convolved action principle provide new rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher order approximations is investigated. The classical single-degree-of-freedom dynamical systems are primarily considered to validate and to investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.

  12. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  13. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    Science.gov (United States)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability, in terms of autonomy, at a specified value of the loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable than a conventional design using monthly average daily load and insolation.
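    A first-cut SAPV sizing of the kind the abstract alludes to can be done with generic textbook formulae: array area from the daily load and peak-sun insolation, and battery capacity from the days of autonomy and the allowed depth of discharge. The paper's own empirical formulae are not reproduced here; every default efficiency below is an illustrative assumption.

    ```python
    def size_sapv(daily_load_wh, insolation_kwh_m2, autonomy_days,
                  panel_eff=0.15, system_eff=0.75, battery_dod=0.5, v_batt=24):
        """Back-of-the-envelope SAPV sizing sketch (generic, not the paper's
        method). Returns required array area in m^2 and battery capacity in Ah."""
        # Array area: daily load / (peak-sun energy converted by panel and system)
        area_m2 = daily_load_wh / (insolation_kwh_m2 * 1000 * panel_eff * system_eff)
        # Battery: carry the load for `autonomy_days` within the allowed depth of discharge
        capacity_ah = daily_load_wh * autonomy_days / (battery_dod * v_batt)
        return area_m2, capacity_ah
    ```

    For a 1.2 kWh/day load at 5 peak sun hours with 2 days of autonomy, this gives roughly 2.1 m² of array and a 200 Ah battery at 24 V.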

  14. A fast approximation method for reliability analysis of cold-standby systems

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Amari, Suprasad V.

    2012-01-01

    Analyzing reliability of large cold-standby systems has been a complicated and time-consuming task, especially for systems with components having non-exponential time-to-failure distributions. In this paper, an approximation model, which is based on the central limit theorem, is presented for the reliability analysis of binary cold-standby systems. The proposed model can estimate the reliability of large cold-standby systems with binary-state components having arbitrary time-to-failure distributions in an efficient and easy way. The accuracy and efficiency of the proposed method are illustrated using several different types of distributions for both 1-out-of-n and k-out-of-n cold-standby systems.
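    The central-limit-theorem idea in this abstract is easy to sketch for the simplest case: in a 1-out-of-n cold-standby system the system lifetime is the sum of the n component lifetimes, so R(t) = P(T₁ + … + Tₙ > t) ≈ 1 − Φ((t − nμ)/(√n·σ)). The function below is that minimal illustration only; it does not cover the paper's k-out-of-n case or heterogeneous component distributions.

    ```python
    import math

    def cold_standby_reliability(t, n, mean, std):
        """CLT approximation sketch for a 1-out-of-n cold-standby system:
        the sum of n i.i.d. component lifetimes is treated as normal with
        mean n*mean and standard deviation sqrt(n)*std."""
        z = (t - n * mean) / (math.sqrt(n) * std)
        # Standard normal survival function via the error function
        return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    ```

    With exponential(1) components the exact system lifetime is Erlang, so the approximation can be checked against the closed-form Poisson sum.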

  15. A dynamic discretization method for reliability inference in Dynamic Bayesian Networks

    International Nuclear Information System (INIS)

    Zhu, Jiandao; Collette, Matthew

    2015-01-01

    The material and modeling parameters that drive structural reliability analysis for marine structures are subject to a significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem. - Highlights: • A dynamic discretization method is developed for low-probability events in DBNs. • The method is compared to existing approaches on two crack growth problems. • The method is shown to improve on existing methods for low-probability events

  16. A Reliable Method to Measure Lip Height Using Photogrammetry in Unilateral Cleft Lip Patients.

    Science.gov (United States)

    van der Zeeuw, Frederique; Murabit, Amera; Volcano, Johnny; Torensma, Bart; Patel, Brijesh; Hay, Norman; Thorburn, Guy; Morris, Paul; Sommerlad, Brian; Gnarra, Maria; van der Horst, Chantal; Kangesu, Loshan

    2015-09-01

    There is still no reliable tool to determine the outcome of the repaired unilateral cleft lip (UCL). The aim of this study was therefore to develop an accurate, reliable tool to measure vertical lip height from photographs. The authors measured the vertical height of the cutaneous and vermilion parts of the lip in 72 anterior-posterior view photographs of 17 patients with repairs to a UCL. Points on the lip's white roll and vermilion were marked on both the cleft and the noncleft sides in each image. Two new concepts were tested. First, photographs were standardized using the horizontal (medial to lateral) eye fissure width (EFW) for calibration. Second, the authors tested the interpupillary line (IPL) and the alar base line (ABL) for their reliability as horizontal lines of reference. Measurements were taken by 2 independent researchers, at 2 different time points each. Overall, 2304 data points were obtained and analyzed. Results showed that the method was very effective in comparing the height of the lip on the cleft side with that on the noncleft side. When using the IPL, inter- and intra-rater reliability was 0.99 to 1.0; with the ABL it varied from 0.91 to 0.99, with one exception at 0.84. The IPL was easier to define, because in some subjects the overhanging nasal tip obscured the alar base, and it gave more consistent measurements, possibly because the reconstructed alar base was sometimes indistinct. However, measurements from the IPL can only give the percentage difference between the left and right sides of the lip, whereas those from the ABL can also give exact measurements. Patient examples are given that show how the measurements correlate with clinical assessment. The authors propose this method of photogrammetry, with the innovative use of the IPL as a reliable horizontal plane and of the EFW for calibration, as a useful and reliable tool to assess the outcome of UCL repair.
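    The measurement idea can be sketched as simple pixel geometry: take the vertical drop from the reference line (IPL) to a lip landmark on each side, report the cleft side as a percentage of the non-cleft side, and optionally convert to millimetres using the eye fissure width (EFW) as the calibration length. This is an illustrative reconstruction of the concept, not the authors' measurement software; all names and the EFW-in-mm assumption are hypothetical.

    ```python
    def lip_height_ratio(ipl_y, cleft_point, noncleft_point, efw_px, efw_mm=None):
        """Sketch: vertical drop from the interpupillary line (IPL) to a lip
        landmark on each side, as a cleft/non-cleft percentage; optionally
        scaled to mm when the eye fissure width (EFW) in mm is assumed known.
        Coordinates are (x, y) pixels with y increasing downward."""
        drop_cleft = cleft_point[1] - ipl_y
        drop_noncleft = noncleft_point[1] - ipl_y
        percent = 100.0 * drop_cleft / drop_noncleft
        if efw_mm is None:
            return percent, None
        scale = efw_mm / efw_px  # mm per pixel from EFW calibration
        return percent, (drop_cleft * scale, drop_noncleft * scale)
    ```

    This also makes the abstract's point concrete: the percentage needs only the IPL, while absolute millimetre values additionally need the EFW calibration.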

  17. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    Science.gov (United States)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information to assess impacts of climate change at regional and global scales. However, statistical downscaling methods have been applied to prepare climate model data for various applications, such as hydrologic and ecologic modelling at a watershed scale. As the reliability and (spatial and temporal) resolution of statistically downscaled climate data mainly depend on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables, which are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and data lengths, and data accuracy varying with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e. thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis) and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparison with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at the AHCCD stations, we ranked the reliability of these publicly available climate products corresponding to the elevations of the stations, discretized into several classes. According to the rank of climate products for each elevation class, we identified the most reliable climate products based on the elevation of target points. A web-based system

  18. On the solution of high order stable time integration methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.

    2013-01-01

    Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108

  19. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  20. Reliability Study Regarding the Use of Histogram Similarity Methods for Damage Detection

    Directory of Open Access Journals (Sweden)

    Nicoleta Gillich

    2013-01-01

    Full Text Available The paper analyses the reliability of three dissimilarity estimators for comparing histograms, as support for a frequency-based damage detection method able to identify structural changes in beam-like structures. First, a brief presentation of the authors' damage detection method is given, with a focus on damage localization. It consists of comparing a histogram derived from measurement results with a large series of histograms, namely the damage location indexes for all locations along the beam, obtained by calculation. We tested dissimilarity estimators such as the Minkowski-form distances, the Kullback-Leibler divergence and the histogram intersection, and found the Minkowski distance to be the method providing the best results. It was tested for numerous locations, using real measurement results as well as results artificially degraded by noise, proving its reliability.

  1. A rapid reliability estimation method for directed acyclic lifeline networks with statistically dependent components

    International Nuclear Information System (INIS)

    Kang, Won-Hee; Kliese, Alyce

    2014-01-01

    Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency as compared to the Monte Carlo simulations

  2. On High-Order Upwind Methods for Advection

    Science.gov (United States)

    Huynh, Hung T.

    2017-01-01

    Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also shows a key connection between the approaches of discontinuous and continuous representations.

  3. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining a Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effects on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies are introduced and compared, aimed at selecting multiple sample points at a time. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.

  4. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid) the following variables were tested; distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head and is suitable for use in subsequent studies.
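    Two of the impact variables described in this abstract, the distance and the angle of the impact point from the dimple-pattern centroid, reduce to elementary plane geometry. The helper below is a sketch of those variables only; the coordinate convention (angle measured anticlockwise from the positive x-axis) is an assumption, since the abstract does not specify one.

    ```python
    import math

    def impact_offset(centroid, impact):
        """Distance (in the coordinates' own units) and angle (degrees,
        anticlockwise from the positive x-axis) of the impact point from
        the dimple-pattern centroid."""
        dx = impact[0] - centroid[0]
        dy = impact[1] - centroid[1]
        return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
    ```

    For example, an impact at (3, 4) relative to the centroid lies 5 units away at about 53.1°.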

  5. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.

  6. Investigation on the reliability of expansion joint for piping with probabilistic method

    International Nuclear Information System (INIS)

    Ishii, Y.; Kambe, M.

    1980-01-01

    The reduction of the plant size is necessitated as one of the major targets in LMFBR design. Usually, a piping work system is extensively used to absorb thermal expansion between two components. Besides this, expansion joints for piping have lately seemed attractive for the same purpose. This paper describes the significance of the expansion joint with multiple boundaries, the breakdown probability of the expansion joint assembly and, in part, of the bellows, by introducing several hypothetical conditions in connection with piping. The importance of in-service inspection (ISI) for the expansion joint is also discussed, using a comparative table and reliability probabilities ranging from partial breakage to full penetration. In conclusion, the expansion joint with ISI should be manufactured with excellent reliability in order to cope with the piping work system; several conditions for the practical application to piping systems are suggested. (author)

  7. Investigation on the reliability of expansion joint for piping with probabilistic method

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, Y; Kambe, M

    1980-02-01

    The reduction of the plant size is necessitated as one of the major targets in LMFBR design. Usually, a piping work system is extensively used to absorb thermal expansion between two components. Besides this, expansion joints for piping have lately seemed attractive for the same purpose. This paper describes the significance of the expansion joint with multiple boundaries, the breakdown probability of the expansion joint assembly and, in part, of the bellows, by introducing several hypothetical conditions in connection with piping. The importance of in-service inspection (ISI) for the expansion joint is also discussed, using a comparative table and reliability probabilities ranging from partial breakage to full penetration. In conclusion, the expansion joint with ISI should be manufactured with excellent reliability in order to cope with the piping work system; several conditions for the practical application to piping systems are suggested. (author)

  8. Investigation on the reliability of expansion joint for piping with probabilistic method

    International Nuclear Information System (INIS)

    Ishii, Yoichiro; Kambe, Mitsuru.

    1979-11-01

    The reduction of the plant size is necessitated as one of the major targets in LMFBR design. Usually, a piping work system is extensively used to absorb thermal expansion between two components. Besides this, expansion joints for piping have lately seemed attractive for the same purpose. This paper describes the significance of the expansion joint with multiple boundaries, the breakdown probability of the expansion joint assembly and, in part, of the bellows, by introducing several hypothetical conditions in connection with piping. The importance of in-service inspection (ISI) for the expansion joint is also discussed, using a comparative table and reliability probabilities ranging from partial breakage to full penetration. In conclusion, the expansion joint with ISI should be manufactured with excellent reliability in order to cope with the piping work system, and several conditions for the practical application to piping systems are suggested. (author)

  9. Reliability of fitness tests using methods and time periods common in sport and occupational management.

    Science.gov (United States)

    Burnstein, Bryan D; Steele, Russell J; Shrier, Ian

    2011-01-01

    Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point, and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for the 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.
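    The ICC at the heart of this study comes from an analysis-of-variance decomposition of repeated measurements. The function below sketches the one-way random-effects form, ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW); the abstract does not state which ICC variant the authors used, so this is an illustrative choice, not a reproduction of their analysis.

    ```python
    def icc_oneway(data):
        """One-way random-effects ICC(1) sketch. `data` is a list of
        subjects, each a list of k repeated measurements; returns the
        between-subject variance share estimated from ANOVA mean squares."""
        n = len(data)
        k = len(data[0])
        grand = sum(sum(row) for row in data) / (n * k)
        means = [sum(row) / k for row in data]
        ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
        ms_within = sum((x - m) ** 2
                        for row, m in zip(data, means) for x in row) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    ```

    Identical repeated measurements give ICC = 1, and measurement noise pulls the value below 1, which is the sense in which ICC > 0.6 serves as the study's acceptability threshold.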

  10. Feasibility to implement the radioisotopic method of nasal mucociliary transport measurement getting reliable results

    International Nuclear Information System (INIS)

    Troncoso, M.; Opazo, C.; Quilodran, C.; Lizama, V.

    2002-01-01

    Aim: Our goal was to implement the radioisotopic method of measuring the nasal mucociliary velocity of transport (NMVT) in a feasible way, in order to make it easily available, and to validate the accuracy of the results. Such a method is needed when primary ciliary dyskinesia (PCD) is suspected, a disorder characterized by low NMVT and non-specific chronic respiratory symptoms that must be confirmed by electron microscopic cilia biopsy. Methods: We performed one hundred studies from February 2000 until February 2002. Patients were aged 2 months to 39 years, mean 9 years. All of them were referred from the Respiratory Disease Department; ninety had upper or lower respiratory symptoms, ten were healthy controls. The procedure, done by the Nuclear Medicine Technologist, consists of placing a 20 μl drop of 99mTc-MAA (0.1 mCi, 4 MBq) behind the head of the inferior turbinate in one nostril using a frontal light, a nasal speculum and a teflon catheter attached to a tuberculin syringe. The drop movement was acquired on a gamma camera-computer system and the velocity was expressed in mm/min. As the patient must not move during the procedure, sedation has to be used in non-cooperative children. Cases with abnormal NMVT values were referred for nasal biopsy. Patients were classified into three groups: normal controls (NC), PCD confirmed by biopsy (PCDB) and cases with respiratory symptoms without biopsy (RSNB). In all patients with NMVT less than 2.4 mm/min, PCD was confirmed by biopsy. There was a clear-cut separation between normal and abnormal values and, interestingly, even the highest NMVT in PCDB cases was lower than the lowest NMVT in NC. The procedure is not as easy as generally described in the literature, because the operator has to acquire some skill, and because of the need for sedation in some cases. Conclusion: The procedure gives reliable, reproducible and objective results. It is safe, inexpensive and quick in cooperative patients. Although, sometimes

  11. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, owing to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to provide proper emergency management actions under such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
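The decomposition idea behind RDA-style analysis can be illustrated with a minimal exact two-terminal reliability computation that conditions ("factors") on one link at a time. This is a toy sketch of the general principle, not the paper's algorithm, and the network data are invented:

```python
def connected(working, s, t):
    """Is t reachable from s over the given set of (u, v) links?"""
    reach, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for a, b in working:
            v = b if a == u else a if b == u else None
            if v is not None and v not in reach:
                reach.add(v)
                frontier.append(v)
    return t in reach

def reliability(edges, s, t, working=()):
    """Exact P(s-t connected); edges is a list of (u, v, p) links.

    Conditions on the first undecided link: with probability p it works
    (move it to the working set), otherwise it is removed.
    """
    working = list(working)
    if connected(working, s, t):
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    return (p * reliability(rest, s, t, working + [(u, v)])
            + (1 - p) * reliability(rest, s, t, working))

# classic 5-link bridge network, every link available with p = 0.9
bridge = [("s", "a", 0.9), ("s", "b", 0.9), ("a", "t", 0.9),
          ("b", "t", 0.9), ("a", "b", 0.9)]
```

For the bridge above the exact result is 0.97848, matching the closed-form bridge formula 2p² + 2p³ − 5p⁴ + 2p⁵; real RDA implementations avoid this exponential branching by identifying whole paths and cuts at each decomposition step.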

  12. Coupling finite elements and reliability methods - application to safety evaluation of pressurized water reactor vessels

    International Nuclear Information System (INIS)

    Pitner, P.; Venturini, V.

    1995-02-01

    When reliability studies are extended from deterministic calculations in mechanics, it is necessary to take into account the variability of input parameters, which is linked to the different sources of uncertainty. Integrals must then be calculated to evaluate the failure risk. This can be performed either by simulation methods or by approximation methods (FORM/SORM). Models in mechanics often require running calculation codes, which must then be coupled with the reliability calculations. These codes can involve large calculation times when they are invoked numerous times during simulation sequences or in complex iterative procedures. The response surface method gives an approximation of the real response from a reduced number of points for which the finite element code is run. Thus, when it is combined with FORM/SORM methods, a coupling can be carried out which gives results in a reasonable calculation time. An application of the response surface method to mechanics-reliability coupling for a mechanical model which calls a finite element code is presented. It corresponds to a probabilistic fracture mechanics study of a pressurized water reactor vessel. (authors). 5 refs., 3 figs
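As a concrete reference point for the FORM side of such a coupling, here is a minimal Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space. The linear limit state is illustrative only; in the study the response would come from the finite element code or its response surface:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def form_hlrf(g, grad_g, n, tol=1e-10, max_iter=100):
    """HL-RF iteration: returns (beta, design point u*) for g(u) = 0."""
    u = [0.0] * n
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        norm2 = sum(c * c for c in gr)
        # project onto the linearized limit-state surface
        scale = (sum(c * x for c, x in zip(gr, u)) - gv) / norm2
        u_new = [scale * c for c in gr]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(x * x for x in u))
    return beta, u

# illustrative linear limit state g(u) = 3 - u1 - u2 in standard normal space
beta, _ = form_hlrf(lambda u: 3.0 - u[0] - u[1], lambda u: [-1.0, -1.0], 2)
pf = norm_cdf(-beta)  # FORM failure probability estimate
```

For this linear case FORM is exact: β = 3/√2 ≈ 2.121 and pf ≈ 0.017; for nonlinear limit states the SORM correction or simulation is needed, which is where the response surface pays off.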

  13. A fracture mechanics and reliability based method to assess non-destructive testings for pressure vessels

    International Nuclear Information System (INIS)

    Kitagawa, Hideo; Hisada, Toshiaki

    1979-01-01

    Quantitative evaluation has not been made of the effects of carrying out preservice and in-service nondestructive tests for securing the soundness, safety and maintainability of pressure vessels, despite the large expense and labor they require. In particular, the problems concerning the timing and interval of in-service inspections lack a reasonable, quantitative evaluation method. In this paper, these problems are treated with an analysis method developed on the basis of reliability technology and probability theory. The growth of surface cracks in pressure vessels was estimated using the results of previous studies. The effects of nondestructive inspection on defects in pressure vessels were evaluated, and the influence of many factors, such as plate thickness, stress and the accuracy of inspection, on the effects of inspection was investigated, together with a method of evaluating inspections at unequal intervals. The reliability analysis taking in-service inspection into consideration, the evaluation of in-service inspection and other affecting factors through typical examples of analysis, and a review concerning the timing of inspection are described. The method of analyzing the reliability of pressure vessels, considering the growth of defects and preservice and in-service nondestructive tests, was systematized so as to be practically usable. (Kako, I.)

  14. PROOF OF CONCEPT FOR A HUMAN RELIABILITY ANALYSIS METHOD FOR HEURISTIC USABILITY EVALUATION OF SOFTWARE

    International Nuclear Information System (INIS)

    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe; Julie L. Marble

    2005-01-01

    An ongoing issue within human-computer interaction (HCI) is the need for simplified or "discount" methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI

  15. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron......), it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of different pre-concentration techniques proposed previously in the literature, the main challenge behind relevant method development is pointed to be the release...

  16. Reliability of different methods used for forming of working samples in the laboratory for seed testing

    Directory of Open Access Journals (Sweden)

    Opra Branislava

    2000-01-01

    Full Text Available The testing of seed quality starts from the moment a sample is formed in a warehouse during processing or packaging of the seed. The seed sampling as the process of obtaining the working sample also assumes each step undertaken during its testing in the laboratory. With the aim of appropriate forming of a seed sample in the laboratory, the usage of a seed divider is prescribed for large-seeded species (such as seed the size of wheat or larger) (ISTA Rules, 1999). The aim of this paper was the comparison of different methods used for obtaining the working samples of maize and wheat seeds using conical, soil and centrifugal dividers. The number of seeds of added admixtures confirmed the reliability of working sample formation. To each maize sample (1000 g), 10 seeds of the following admixtures were added: Zea mays L. (red pericarp), Hordeum vulgare L., Triticum aestivum L., and Glycine max (L.) Merr. Two methods were used for formation of the maize seed working sample. To wheat samples (1000 g), 10 seeds of each of the following species were added: Avena sativa (hulled seeds), Hordeum vulgare L., Galium tricorne Stokes, and Polygonum lapathifolium L. For formation of wheat seed working samples, four methods were used. An optimum of 9, but not fewer than 7, seeds of admixture were expected in the maize seed working sample, while for wheat, at least one seed of admixture was expected to be found in the working sample. The obtained results confirmed that the formation of the maize seed working samples was the most reliable when the centrifugal divider, the first method, was used (average of admixture - 9.37). Of the observed admixtures, the seed of Triticum aestivum L. was the most uniformly distributed, the first method also being used (6.93). The second method gives high average values satisfying the given criterion, but it should be used with previous homogenization of the sample being tested. The forming of wheat seed working samples is the most reliable if the

  17. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korea standard method for Human Reliability Analysis (HRA), and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far to analyze diagnosis error probability are suggested as a part of the standard method. A study of comprehensive application was also performed to evaluate the suitability of the proposed rules

  18. Review on Laryngeal Palpation Methods in Muscle Tension Dysphonia: Validity and Reliability Issues.

    Science.gov (United States)

    Khoddami, Seyyedeh Maryam; Ansari, Noureddin Nakhostin; Jalaie, Shohreh

    2015-07-01

    Laryngeal palpation is a common clinical method for the assessment of neck and laryngeal muscles in muscle tension dysphonia (MTD). To review the available laryngeal palpation methods used in patients with MTD for the assessment, diagnosis, or document of treatment outcomes. A systematic review of the literature concerning palpatory methods in MTD was conducted using the databases MEDLINE (PubMed), ScienceDirect, Scopus, Web of science, Web of knowledge and Cochrane Library between July and October 2013. Relevant studies were identified by one reviewer based on screened titles/abstracts and full texts. Manual searching was also used to track the source literature. There were five main as well as miscellaneous palpation methods that were different according to target anatomical structures, judgment or grading system, and using tasks. There were only a few scales available, and the majority of the palpatory methods were qualitative. Most of the palpatory methods evaluate the tension at both static and dynamic tasks. There was little information about the validity and reliability of the available methods. The literature on the scientific evidence of muscle tension indicators perceived by laryngeal palpation in MTD is scarce. Future studies should be conducted to investigate the validity and reliability of palpation methods. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. DEPEND-HRA-A method for consideration of dependency in human reliability analysis

    International Nuclear Information System (INIS)

    Cepin, Marko

    2008-01-01

    A consideration of dependencies between human actions is an important issue within human reliability analysis. A method was developed which integrates the features of existing methods and the experience from a full-scope plant simulator. The method is used in a real plant-specific human reliability analysis as a part of the probabilistic safety assessment of a nuclear power plant. The method distinguishes dependency for pre-initiator events from dependency for initiator and post-initiator events. It identifies dependencies based on scenarios, where consecutive human actions are modeled, and based on a list of minimal cut sets, which is obtained by running the minimal cut set analysis with high values of human error probabilities in the evaluation. A large example study, which consisted of a large number of human failure events, demonstrated the applicability of the method. Comparative analyses show that both the selection of the dependency method and the selection of dependency levels within the method largely impact the results of probabilistic safety assessment. Even if the core damage frequency is not impacted much, the listings of important basic events in terms of risk increase and risk decrease factors may change considerably. More effort is needed on the subject to prepare the background for more detailed guidelines, which will remove the subjectivity from the evaluations as much as possible
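Dependency models of this kind typically inflate a nominal human error probability (HEP) according to a discrete dependency level. For illustration, the classic THERP conditional-HEP formulas (Swain and Guttmann) are sketched below; DEPEND-HRA builds on such levels, but its exact rules are not reproduced here:

```python
# Conditional HEP of a subsequent action, given failure of the previous
# one, under the five classic THERP dependency levels.
THERP_DEPENDENCY = {
    "ZD": lambda p: p,                  # zero dependence
    "LD": lambda p: (1 + 19 * p) / 20,  # low dependence
    "MD": lambda p: (1 + 6 * p) / 7,    # moderate dependence
    "HD": lambda p: (1 + p) / 2,        # high dependence
    "CD": lambda p: 1.0,                # complete dependence
}

def conditional_hep(nominal_hep, level):
    """Inflate a nominal HEP according to the chosen dependency level."""
    return THERP_DEPENDENCY[level](nominal_hep)
```

Even a small nominal HEP of 1e-3 becomes about 0.05 under low and 0.5 under high dependence, which is why the choice of dependency level can dominate the risk-importance listings the abstract mentions.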

  20. Method of administration of PROMIS scales did not significantly impact score level, reliability, or validity

    DEFF Research Database (Denmark)

    Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara

    2014-01-01

    OBJECTIVES: To test the impact of the method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). STUDY DESIGN AND SETTING: Two nonoverlapping parallel forms each containing eight items from......, no significant mode differences were found and all confidence intervals were within the prespecified minimal important difference of 0.2 standard deviation. Parallel-forms reliabilities were very high (ICC = 0.85-0.93). Only one across-mode ICC was significantly lower than the same-mode ICC. Tests of validity...... questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICCs), and convergent/discriminant validity. RESULTS: In difference score analyses...

  1. Establishing survey validity and reliability for American Indians through "think aloud" and test-retest methods.

    Science.gov (United States)

    Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L; Burgess, Katherine M; Puumala, Susan E; Wilton, Georgiana; Hanson, Jessica D

    2015-06-01

    The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To develop validity, content experts provided input into the survey measures, and a "think aloud" methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test-retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test-retest revealed that some of the questions performed better for the modified version, whereas others appeared to be more reliable for the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and in indicating specific survey questions that needed to be modified for this population. © The Author(s) 2015.

  2. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    International Nuclear Information System (INIS)

    Lee, Seokje; Kim, Ingul; Jang, Moonho; Kim, Jaeki; Moon, Jungwon

    2013-01-01

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle

  3. Reliability and Sensitivity Analysis for Laminated Composite Plate Using Response Surface Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seokje; Kim, Ingul [Chungnam National Univ., Daejeon (Korea, Republic of); Jang, Moonho; Kim, Jaeki; Moon, Jungwon [LIG Nex1, Yongin (Korea, Republic of)

    2013-04-15

    Advanced fiber-reinforced laminated composites are widely used in various fields of engineering to reduce weight. The material property of each ply is well known; specifically, it is known that ply is less reliable than metallic materials and very sensitive to the loading direction. Therefore, it is important to consider this uncertainty in the design of laminated composites. In this study, reliability analysis is conducted using Callosum and Meatball interactions for a laminated composite plate for the case in which the tip deflection is the design requirement and the material property is a random variable. Furthermore, the efficiency and accuracy of the approximation method is identified, and a probabilistic sensitivity analysis is conducted. As a result, we can prove the applicability of the advanced design method for the stabilizer of an underwater vehicle.

  4. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a

  5. INNOVATIVE METHODS TO EVALUATE THE RELIABILITY OF INFORMATION CONSOLIDATED FINANCIAL STATEMENTS

    Directory of Open Access Journals (Sweden)

    Irina P. Kurochkina

    2014-01-01

    Full Text Available The article explores the possibility of using foreign innovative methods to assess the reliability of information in the consolidated financial statements of Russian companies. Recommendations are made for their adaptation and application in commercial organizations. Beneish model indicators are implemented in one of the world's largest vertically integrated steel and mining companies. Audit firms are proposed to use these methods of assessing the reliability of information in the practical application of ISA.
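The method referred to in the record appears to be the Beneish model (M-score), a widely used screen for earnings manipulation based on eight financial-statement ratios. A sketch with the published coefficients follows; the input index values below are illustrative, not data from any company in the article:

```python
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Eight-variable Beneish M-score; values above about -1.78 are
    commonly read as a red flag for manipulated statements."""
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi
            + 0.892 * sgi + 0.115 * depi - 0.172 * sgai
            + 4.679 * tata - 0.327 * lvgi)

# a "neutral" firm: every year-over-year index at 1.0, zero total accruals
neutral = beneish_m_score(1, 1, 1, 1, 1, 1, 0, 1)
```

The neutral case scores −2.48, safely below the usual −1.78 cutoff; inflated receivables (DSRI), sales growth (SGI) or accruals (TATA) push the score up toward the red-flag region.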

  6. Use of simulation methods in the evaluation of reliability and availability of complex system

    International Nuclear Information System (INIS)

    Maigret, N.; Duchemin, B.; Robert, T.; Villeneuve, J.J. de; Lanore, J.M.

    1982-04-01

    After a short review of the available standard methods in the reliability field, such as Boolean algebra for fault trees and semi-regeneration theory for Markov models, this paper shows how the BIAF code, based on a state description of a system and simulation techniques, can solve many problems. It also shows how the use of importance sampling and biasing techniques allows us to deal with the rare-event problem
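The rare-event difficulty mentioned at the end can be shown with a minimal importance-sampling estimator: to estimate P(Z > c) for standard normal Z, sample from a normal shifted to the rare region and reweight by the likelihood ratio. This is a generic textbook sketch, not the BIAF code's biasing scheme:

```python
import math
import random

def rare_event_prob_is(c, n=200_000, seed=1):
    """Estimate P(Z > c), Z ~ N(0, 1), by sampling from N(c, 1).

    Likelihood ratio phi(x) / phi(x - c) = exp(-c*x + c*c/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(c, 1.0)  # biased draw centered on the threshold
        if x > c:
            total += math.exp(-c * x + c * c / 2.0)
    return total / n

estimate = rare_event_prob_is(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(Z > 4), about 3.2e-5
```

Crude Monte Carlo with the same sample size would see only a handful of exceedances of c = 4; the reweighted sampler recovers the 3e-5 probability to within a few percent.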

  7. A new method for evaluating the availability, reliability, and maintainability whatever may be the probability law

    International Nuclear Information System (INIS)

    Doyon, L.R.; CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette

    1975-01-01

    A simple method is presented for solving by computer every system model (availability, reliability, and maintenance) with intervals between failures and repair durations distributed according to any probability law, and for any maintenance policy. A matrix equation is obtained using Markov diagrams. An example is given with the solution by the APAFS program (Algorithme Pour l'Analyse de la Fiabilite des Systemes) [fr
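In the special case where failure and repair times are exponential, the matrix equation reduces to the familiar steady-state balance πQ = 0 of a Markov diagram. A minimal solver is sketched below; note that the record's method covers arbitrary probability laws, which this exponential sketch does not:

```python
def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 by Gaussian elimination.

    Q[i][j] = transition rate i -> j (i != j); diagonal = -row sum.
    """
    n = len(Q)
    # balance equations (transposed), last one replaced by normalization
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for k in range(n):                       # elimination with pivoting
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):           # back substitution
        s = sum(A[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

lam, mu = 1e-3, 1e-1           # illustrative failure and repair rates (per hour)
Q = [[-lam, lam], [mu, -mu]]   # states: 0 = up, 1 = down
pi = steady_state(Q)           # pi[0] = steady-state availability
```

For the two-state machine the solver reproduces the closed form A = μ/(λ+μ) ≈ 0.9901.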

  8. Reliability design of a critical facility: An application of PRA methods

    International Nuclear Information System (INIS)

    Souza Vieira Neto, A.; Souza Borges, W. de

    1987-01-01

    Although a general agreement concerning the enforcement of reliability (probabilistic) design criteria for nuclear utilities is yet to be achieved, PRA methodology can still be used successfully as a project design and review tool, aimed at improving a system's prospective performance or minimizing expected accident consequences. In this paper, the potential of such an application of PRA methods is examined in the special case of a critical design project currently being developed in Brazil. (orig.)

  9. Data collection on the unit control room simulator as a method of operator reliability analysis

    International Nuclear Information System (INIS)

    Holy, J.

    1998-01-01

    The report consists of the following chapters: (1) Probabilistic assessment of nuclear power plant operation safety and human factor reliability analysis; (2) Simulators and simulations as human reliability analysis tools; (3) DOE project for using the collection and analysis of data from the unit control room simulator in human factor reliability analysis at the Paks nuclear power plant; (4) General requirements for the organization of the simulator data collection project; (5) Full-scale simulator at the Nuclear Power Plants Research Institute in Trnava, Slovakia, used as a training means for operators of the Dukovany NPP; (6) Assessment of the feasibility of quantification of important human actions modelled within a PSA study by employing simulator data analysis; (7) Assessment of the feasibility of using the various exercise topics for the quantification of the PSA model; (8) Assessment of the feasibility of employing the simulator in the analysis of the individual factors affecting the operator's activity; and (9) Examples of application of statistical methods in the analysis of the human reliability factor. (P.A.)

  10. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for the detection, localization, tracking, collection, and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. In the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
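A drastically simplified instance of such a design problem: for each subsystem, choose one component type (reliability, cost) so that series-system reliability is maximized within a budget. The data and the brute-force search are illustrative only; the paper's models are large-scale two-level discrete optimization problems:

```python
from itertools import product

# illustrative (reliability, cost) options per subsystem
SUBSYSTEMS = [
    [(0.95, 3), (0.99, 7)],             # detector options
    [(0.90, 2), (0.97, 5), (0.99, 9)],  # processor options
    [(0.92, 2), (0.98, 6)],             # link options
]

def best_design(subsystems, budget):
    """Exhaustively pick one type per subsystem, maximizing series
    reliability subject to a total-cost budget."""
    best = (0.0, None)
    for choice in product(*subsystems):
        cost = sum(c for _, c in choice)
        if cost > budget:
            continue
        rel = 1.0
        for r, _ in choice:
            rel *= r
        best = max(best, (rel, choice))
    return best

best_rel, best_choice = best_design(SUBSYSTEMS, budget=14)
```

With a budget of 14 the optimum mixes the cheaper detector with the better processor and link (R ≈ 0.903); realistic problem sizes require the dedicated algorithms the record describes rather than enumeration.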

  11. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    Science.gov (United States)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value lies in non-routine components. Maintenance components are components procured based on maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome the problem by reevaluating the components required by maintenance activities. The case chosen is the roller mill system because it has the highest record of unscheduled downtime. The components required for each maintenance activity are determined from their failure distributions, so the number of components needed can be predicted. Moreover, those components can be reclassified from non-routine to routine components, so their procurement can be carried out regularly. Based on the analysis conducted, the failures occurring in almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components. The reliability of and need for those components were then calculated for a one-year operation period. Based on these findings, it is suggested to change all of the components in the overhaul activity to increase the reliability of the roller mill system. Besides, the inventory system should follow the maintenance schedule and the number of components required in maintenance activities, so the procurement value will decrease and system reliability will increase.
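The link between a component's failure distribution and a scheduled-discard task can be sketched with a two-parameter Weibull model: choose the replacement interval that keeps reliability above a target, then derive the routine spares demand. The parameter values are illustrative, not the cement plant's data:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta) for a two-parameter Weibull."""
    return math.exp(-((t / eta) ** beta))

def discard_interval(beta, eta, target_rel):
    """Largest interval t with R(t) >= target_rel."""
    return eta * (-math.log(target_rel)) ** (1.0 / beta)

def annual_spares(n_installed, interval_hours, hours_per_year=8760.0):
    """Routine demand if every unit is discarded each interval."""
    return n_installed * hours_per_year / interval_hours

# e.g. wear-out rollers: beta = 2.5, eta = 6000 h, 90% reliability target
interval = discard_interval(2.5, 6000.0, 0.90)
demand = annual_spares(4, interval)
```

Because β > 1 (wear-out), scheduled discard is worthwhile and the resulting interval (about 2440 h here) converts directly into a predictable, routine procurement quantity, which is the reclassification argument made in the abstract.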

  12. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    Science.gov (United States)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
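The occupation-angle trick described above can be shown in a few lines. Writing each occupation number as n_i = 2 sin²(θ_i) keeps it in [0, 2] for any unconstrained angle; this particular trigonometric form is an assumed reading of the abstract's description, and the electron-count restriction is handled separately in SOEO:

```python
import math

def occupations(thetas):
    """Occupation numbers n_i = 2*sin(theta_i)**2, always in [0, 2]."""
    return [2.0 * math.sin(t) ** 2 for t in thetas]

def n_electrons(thetas):
    """Total electron count implied by the occupation angles."""
    return sum(occupations(thetas))

# two half-occupied orbitals: theta = pi/4 gives n = 1 for each
occ = occupations([math.pi / 4, math.pi / 4])
```

The bounds 0 ≤ n_i ≤ 2 then need no explicit inequality constraints in the Newton-Raphson step, which is what makes the optimization "unconstrained" in the title.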

  13. Reliability of Lyapunov characteristic exponents computed by the two-particle method

    Science.gov (United States)

    Mei, Lijie; Huang, Li

    2018-03-01

    For highly complex problems, such as the post-Newtonian formulation of compact binaries, the two-particle method may be a better, or even the only, choice for computing the Lyapunov characteristic exponent (LCE). This method avoids the complex calculation of variational equations required by the variational method. However, the two-particle method sometimes provides spurious estimates of LCEs. In this paper, we first analyze the equivalence of the definition of the LCE between the variational and two-particle methods for Hamiltonian systems. Then, we develop a criterion to determine the reliability of LCEs computed by the two-particle method by considering the magnitude of the initial tangent (or separation) vector ξ0 (or δ0), the renormalization time interval τ, the machine precision ε, and the global truncation error ɛT. The reliable Lyapunov characteristic indicators estimated by the two-particle method form a V-shaped region, which is restricted by δ0, ε, and ɛT. Finally, numerical experiments with the Hénon-Heiles system, spinning compact binaries, and the post-Newtonian circular restricted three-body problem strongly support the theoretical results.
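The two-particle method itself is easy to state: evolve a reference trajectory and a shadow trajectory separated by a small d0, accumulate log(d/d0), and renormalize the separation at every step. The sketch below applies it to the fully chaotic logistic map (whose largest LCE is ln 2), which is far simpler than the record's post-Newtonian systems, but the renormalization logic is the same:

```python
import math

def lce_two_particle(f, x0, d0=1e-8, n_steps=20_000, n_transient=100):
    """Largest Lyapunov exponent of a 1-D map f by the two-particle
    (shadow-trajectory) method with per-step renormalization."""
    x = x0
    for _ in range(n_transient):   # let the orbit settle on the attractor
        x = f(x)
    y = x + d0
    total = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        total += math.log(d / d0)
        y = x + d0 * (y - x) / d   # rescale separation back to d0
    return total / n_steps

# logistic map at r = 4: the exact LCE is ln 2 = 0.6931...
lce = lce_two_particle(lambda x: 4.0 * x * (1.0 - x), 0.2)
```

If d0 is chosen too close to machine precision, or the renormalization interval is too long, the estimate degrades; that trade-off is exactly the V-shaped reliability region the paper maps out.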

  14. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
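Thurstone's Case V turns paired-choice proportions into interval-scale values: convert each proportion to a normal z-score and average by row. A self-contained sketch follows, with a bisection-based inverse normal CDF that is adequate for illustration (production code would use a library quantile function):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def norm_ppf(p):
    """Inverse normal CDF by bisection; fine for a sketch."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def thurstone_case_v(P):
    """P[i][j] = proportion of judges choosing item i over item j.
    Returns one scale value per item, centered near zero."""
    n = len(P)
    z = [[norm_ppf(P[i][j]) if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    return [sum(row) / n for row in z]

# two items, with item 0 preferred 84% of the time
scale = thurstone_case_v([[0.5, 0.84], [0.16, 0.5]])
```

Here the scale separation equals z(0.84) ≈ 0.99 units of the discriminal dispersion, which is how the Law of Comparative Judgment converts choice frequencies into distances.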

  15. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction and some works have demonstrated the feasibility of its use to forecast system reliability. For accuracy of reliability forecasting, the selection of SVR's parameters is important. The existing research works on SVR's parameters selection divide the example dataset into training and test subsets, and tune the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. Differently, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation comes. Because of the adaptability of the parameters to dynamic data pattern, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to the decrease of the number of training data and the change of initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR

  16. High-Order Curvilinear Finite Element Methods for Lagrangian Hydrodynamics [High Order Curvilinear Finite Elements for Lagrangian Hydrodynamics]

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, Veselin A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, Tzanio V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-09-20

    The numerical approximation of the Euler equations of gas dynamics in a moving Lagrangian frame is at the heart of many multiphysics simulation algorithms. Here, we present a general framework for high-order Lagrangian discretization of these compressible shock hydrodynamics equations using curvilinear finite elements. This method is an extension of the approach outlined in [Dobrev et al., Internat. J. Numer. Methods Fluids, 65 (2010), pp. 1295--1310] and can be formulated for any finite dimensional approximation of the kinematic and thermodynamic fields, including generic finite elements on two- and three-dimensional meshes with triangular, quadrilateral, tetrahedral, or hexahedral zones. We discretize the kinematic variables of position and velocity using a continuous high-order basis function expansion of arbitrary polynomial degree which is obtained via a corresponding high-order parametric mapping from a standard reference element. This enables the use of curvilinear zone geometry, higher-order approximations for fields within a zone, and a pointwise definition of mass conservation which we refer to as strong mass conservation. Moreover, we discretize the internal energy using a piecewise discontinuous high-order basis function expansion which is also of arbitrary polynomial degree. This facilitates multimaterial hydrodynamics by treating material properties, such as equations of state and constitutive models, as piecewise discontinuous functions which vary within a zone. To satisfy the Rankine--Hugoniot jump conditions at a shock boundary and generate the appropriate entropy, we introduce a general tensor artificial viscosity which takes advantage of the high-order kinematic and thermodynamic information available in each zone. Finally, we apply a generic high-order time discretization process to the semidiscrete equations to develop the fully discrete numerical algorithm. Our method can be viewed as the high-order generalization of the so-called staggered
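
The semi-discrete structure sketched in this abstract (momentum, energy, and mesh motion) can be summarized as follows; this is a hedged reconstruction from the description above, not an equation quoted from the paper:

```latex
\mathbf{M}_{\mathrm{v}}\,\frac{d\mathbf{v}}{dt} = -\,\mathbf{F}\cdot\mathbf{1},
\qquad
\mathbf{M}_{\mathrm{e}}\,\frac{d\mathbf{e}}{dt} = \mathbf{F}^{T}\cdot\mathbf{v},
\qquad
\frac{d\mathbf{x}}{dt} = \mathbf{v},
```

where M_v and M_e are the kinematic and thermodynamic mass matrices, F is a generalized force matrix assembled from the stress (including the tensor artificial viscosity) and the basis functions, x and v collect the position and velocity degrees of freedom, and e collects the internal energy unknowns. The generic high-order time discretization mentioned above is then applied to this coupled ODE system.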

  17. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    Science.gov (United States)

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

    The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
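
A single ascending series can be sketched as follows; the detection rule and the 17 mA threshold are hypothetical, and the example also shows the step-size bias noted above.

```python
def ascending_threshold(respond, start=0.0, step=2.0, max_ma=50.0):
    """Single ascending series: raise intensity by `step` mA until a reflex
    is detected; the first detected intensity is the threshold estimate."""
    intensity = start
    while intensity <= max_ma:
        if respond(intensity):
            return intensity
        intensity += step
    return None  # no reflex within the tested range

# Hypothetical participant whose true NFR threshold is 17 mA.
respond = lambda ma: ma >= 17.0

print(ascending_threshold(respond, step=2.0))  # 18.0: coarse steps overshoot
print(ascending_threshold(respond, step=1.0))  # 17.0: finer steps reduce bias
```

The two printed values illustrate why thresholds obtained with different ascending increments should be compared with caution.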

  18. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules, and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  19. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules, and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  20. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan

    2005-12-15

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules, and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  1. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan

    2005-12-15

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules, and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  2. ImageJ: A Free, Easy, and Reliable Method to Measure Leg Ulcers Using Digital Pictures.

    Science.gov (United States)

    Aragón-Sánchez, Javier; Quintana-Marrero, Yurena; Aragón-Hernández, Cristina; Hernández-Herero, María José

    2017-12-01

    Wound measurement to document the healing course of chronic leg ulcers has an important role in the management of these patients. Digital cameras in smartphones are readily available and easy to use, and taking pictures of wounds is becoming routine in specialized departments. Analyzing digital pictures with appropriate software provides clinicians a quick, clean, and easy-to-use tool for measuring wound area. A set of 25 digital pictures of plain foot and leg ulcers was the basis of this study. Photographs were taken with an iPhone 6S (Apple Inc, Cupertino, CA), which has a 12-megapixel camera, using the flash and placing a ruler next to the wound, parallel with the healthy skin. The digital photographs were visualized with ImageJ 1.45s freeware (National Institutes of Health, Rockville, MD; http://imagej.net/ImageJ). Wound area measurement was carried out by 4 raters: head of the department, wound care nurse, physician, and medical student. We assessed intra- and interrater reliability using the intraclass correlation coefficient. To determine intraobserver reliability, 2 of the raters repeated the measurement of the set 1 week after the first reading. The interrater model displayed an intraclass correlation coefficient of 0.99 with a 95% confidence interval of 0.999 to 1.000, showing excellent reliability. The intrarater model of both examiners showed excellent reliability. In conclusion, analyzing digital images of leg ulcers with ImageJ estimates wound area with excellent reliability. This method provides a free, rapid, and accurate way to measure wounds and could routinely be used to document wound healing in daily clinical practice.
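
The underlying calculation, converting a traced pixel region into physical area using the ruler visible in the photograph, can be sketched as follows (toy data; ImageJ performs the equivalent spatial calibration internally):

```python
import numpy as np

def wound_area_mm2(mask, ruler_px, ruler_mm=10.0):
    """Area of a traced wound mask, calibrated with a ruler in the photo.
    mask: boolean array, True inside the wound outline.
    ruler_px: measured pixel length of `ruler_mm` millimetres on the ruler."""
    mm_per_px = ruler_mm / ruler_px
    return float(mask.sum()) * mm_per_px ** 2

# Toy example: a 40 x 25 px rectangular "wound", 50 px per 10 mm of ruler.
mask = np.zeros((100, 100), dtype=bool)
mask[10:50, 10:35] = True          # 40 * 25 = 1000 px
area = wound_area_mm2(mask, ruler_px=50.0)
print(round(area, 2))              # 40.0
```

In practice the mask comes from manually outlining the wound in the software; the calibration step is what makes areas comparable across photographs taken at different distances.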

  3. Reliability and Validity of 3 Methods of Assessing Orthopedic Resident Skill in Shoulder Surgery.

    Science.gov (United States)

    Bernard, Johnathan A; Dattilo, Jonathan R; Srikumaran, Uma; Zikria, Bashir A; Jain, Amit; LaPorte, Dawn M

    Traditional measures for evaluating resident surgical technical skills (e.g., case logs) assess operative volume but not level of surgical proficiency. Our goal was to compare the reliability and validity of 3 tools for measuring surgical skill among orthopedic residents when performing 3 open surgical approaches to the shoulder. A total of 23 residents at different stages of their surgical training were tested for technical skill pertaining to 3 shoulder surgical approaches using the following measures: Objective Structured Assessment of Technical Skills (OSATS) checklists, the Global Rating Scale (GRS), and a final pass/fail assessment determined by 3 upper extremity surgeons. Adverse events were recorded. The Cronbach α coefficient was used to assess reliability of the OSATS checklists and GRS scores. Interrater reliability was calculated with intraclass correlation coefficients. Correlations among OSATS checklist scores, GRS scores, and pass/fail assessment were calculated with Spearman ρ. Validity of OSATS checklists was determined using analysis of variance with postgraduate year (PGY) as a between-subjects factor. Significance was set at p < 0.05 for all 3 shoulder approaches. Checklist scores showed superior interrater reliability compared with GRS and subjective pass/fail measurements. GRS scores were positively correlated across training years. The incidence of adverse events was significantly higher among PGY-1 and PGY-2 residents compared with more experienced residents. OSATS checklists are a valid and reliable assessment of technical skills across 3 surgical shoulder approaches. However, checklist scores do not measure quality of technique. Documenting adverse events is necessary to assess quality of technique and ultimate pass/fail status. Multiple methods of assessing surgical skill should be considered when evaluating orthopedic resident surgical performance. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
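
For reference, Cronbach's α, the internal-consistency statistic used for the checklist and GRS scores, can be computed as in this minimal sketch (invented scores, not study data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_subjects, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items yield alpha = 1.0.
print(round(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), 3))  # 1.0
```

Values near 1 indicate that the checklist items rank residents consistently; values well below roughly 0.7 would suggest the items do not measure a single underlying skill.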

  4. Simple, reliable, and nondestructive method for the measurement of vacuum pressure without specialized equipment.

    Science.gov (United States)

    Yuan, Jin-Peng; Ji, Zhong-Hua; Zhao, Yan-Ting; Chang, Xue-Fang; Xiao, Lian-Tuan; Jia, Suo-Tang

    2013-09-01

    We present a simple, reliable, and nondestructive method for the measurement of vacuum pressure in a magneto-optical trap. The vacuum pressure is verified to be proportional to the collision rate constant between cold atoms and the background gas, with a coefficient k that can be calculated by means of the simple ideal gas law. The rate constant for loss due to collisions with all background gases can be derived from the total collision loss rate using a series of loading curves of cold atoms recorded at different trapping laser intensities. The presented method is also applicable to other cold atomic systems and meets the miniaturization requirement of commercial applications.
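
A loading-curve analysis of this kind can be sketched as follows; the loss rate, atom number, and noise-free data are invented, and the fit simply recovers the simulated rate:

```python
import numpy as np

# MOT loading curve: N(t) = N_ss * (1 - exp(-Gamma * t)), where the total
# loss rate Gamma is dominated by background-gas collisions; the pressure is
# then proportional to the collision rate constant extracted from Gamma.
true_gamma = 2.5          # s^-1, hypothetical loss rate
n_ss = 1e7                # steady-state atom number
t = np.linspace(0.05, 2.0, 40)
n = n_ss * (1 - np.exp(-true_gamma * t))

# Linearize: log(1 - N/N_ss) = -Gamma * t, then fit the slope.
gamma_fit = -np.polyfit(t, np.log(1 - n / n_ss + 1e-12), 1)[0]
print(round(float(gamma_fit), 2))  # 2.5
```

Repeating this fit at several trapping laser intensities and extrapolating, as the abstract describes, separates the background-gas loss rate from intensity-dependent loss channels.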

  5. Features of an advanced human reliability analysis method, AGAPE-ET

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Jung, Won Dea; Park, Jin Kyun

    2005-01-01

    This paper presents the main features of an advanced human reliability analysis (HRA) method, AGAPE-ET. It has the capabilities to deal with the diagnosis failures and the errors of commission (EOC), which have not been normally treated in the conventional HRAs. For the analysis of the potential for diagnosis failures, an analysis framework, which is called the misdiagnosis tree analysis (MDTA), and a taxonomy of the misdiagnosis causes with appropriate quantification schemes are provided. For the identification of the EOC events from the misdiagnosis, some procedural guidance is given. An example of the application of the method is also provided

  6. Features of an advanced human reliability analysis method, AGAPE-ET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea; Park, Jin Kyun [Korea Atomic Energy Research Institute, Taejeon (Korea, Republic of)

    2005-11-15

    This paper presents the main features of an advanced human reliability analysis (HRA) method, AGAPE-ET. It has the capabilities to deal with the diagnosis failures and the errors of commission (EOC), which have not been normally treated in the conventional HRAs. For the analysis of the potential for diagnosis failures, an analysis framework, which is called the misdiagnosis tree analysis (MDTA), and a taxonomy of the misdiagnosis causes with appropriate quantification schemes are provided. For the identification of the EOC events from the misdiagnosis, some procedural guidance is given. An example of the application of the method is also provided.

  7. Between-day reliability of a method for non-invasive estimation of muscle composition.

    Science.gov (United States)

    Simunič, Boštjan

    2012-08-01

    Tensiomyography is a method for valid and non-invasive estimation of skeletal muscle fibre type composition. The validity of selected temporal tensiomyographic measures has been well established recently; there is, however, no evidence regarding the method's between-day reliability. It is therefore the aim of this paper to establish the between-day repeatability of tensiomyographic measures in three skeletal muscles. On three consecutive days, 10 healthy male volunteers (mean±SD: age 24.6 ± 3.0 years; height 177.9 ± 3.9 cm; weight 72.4 ± 5.2 kg) were examined in a supine position. Four temporal measures (delay, contraction, sustain, and half-relaxation time) and maximal amplitude were extracted from the displacement-time tensiomyogram. A reliability analysis was performed with calculations of bias, random error, coefficient of variation (CV), standard error of measurement, and intra-class correlation coefficient (ICC) with a 95% confidence interval. The ICC analysis demonstrated excellent agreement (ICC was over 0.94 in 14 out of 15 tested parameters). However, a poorer CV was observed for half-relaxation time, presumably because of the specifics of the parameter definition itself. These data indicate that, for the three muscles tested, tensiomyographic measurements were reproducible across consecutive test days. Furthermore, we identified the most likely origin of the lowest reliability, which was detected in half-relaxation time. Copyright © 2012 Elsevier Ltd. All rights reserved.
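
The ICC used in such a reliability analysis can be computed from a subjects-by-days matrix; the sketch below implements the common two-way random, absolute-agreement, single-measures form ICC(2,1) on invented data (the abstract does not state which ICC form was used):

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    x: (n_subjects, k_days) matrix of repeated measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # days
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Identical measurements on both days give perfect agreement.
print(icc_2_1([[10, 10], [20, 20], [30, 30]]))  # 1.0
```

With real between-day data the ICC drops below 1 as day-to-day error grows, which is exactly what the reported per-parameter ICC values summarize.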

  8. Cervical vertebral maturation method and mandibular growth peak: a longitudinal study of diagnostic reliability.

    Science.gov (United States)

    Perinetti, Giuseppe; Primozic, Jasmina; Sharma, Bhavna; Cioffi, Iacopo; Contardo, Luca

    2018-03-28

    The capability of the cervical vertebral maturation (CVM) method to identify the mandibular growth peak on an individual basis remains undetermined. The diagnostic reliability of the six-stage CVM method in the identification of the mandibular growth peak was thus investigated. From the files of the Oregon and Burlington Growth Studies (data obtained between the early 1950s and mid-1970s), 50 subjects (26 females, 24 males) with at least seven annual lateral cephalograms taken from 9 to 16 years were identified. Cervical vertebral maturation was assessed according to the CVM code staging system, and mandibular growth was defined as annual increments in the Co-Gn distance. A diagnostic reliability analysis was carried out to establish the capability of the circumpubertal CVM stages 2, 3, and 4 to identify the imminent mandibular growth peak. Variable durations of each of the CVM stages 2, 3, and 4 were seen. The overall diagnostic accuracy values for the CVM stages 2, 3, and 4 were 0.70, 0.76, and 0.77, respectively. These low values appeared to be due to false-positive cases, likely related to secular trends in conjunction with the use of a discrete staging system. In most of the Burlington Growth Study sample, the lateral head film at age 15 was missing. None of the CVM stages 2, 3, and 4 reached a satisfactory diagnostic reliability in the identification of the imminent mandibular growth peak.
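
Diagnostic accuracy of a stage can be recovered from a confusion table; the counts below are hypothetical but chosen so the accuracy matches the 0.70 reported for stage 2:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Basic diagnostic-reliability metrics for a stage that flags the
    imminent growth peak (positive) versus its actual occurrence."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for one CVM stage in a 50-subject sample.
m = diagnostic_metrics(tp=20, fp=12, tn=15, fn=3)
print({k: round(v, 2) for k, v in m.items()})
```

Note how a sizeable false-positive count (here 12 of 50) is enough to pull overall accuracy down to about 0.70 even when sensitivity is high, which mirrors the abstract's explanation of the low values.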

  9. Optimal design method for a digital human–computer interface based on human reliability in a nuclear power plant. Part 3: Optimization method for interface task layout

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Wang, Yiqun; Zhang, Li; Xie, Tian; Li, Min; Peng, Yuyuan; Wu, Daqing; Li, Peiyao; Ma, Congmin; Shen, Mengxu; Wu, Xing; Weng, Mengyun; Wang, Shiwei; Xie, Cen

    2016-01-01

    Highlights: • The authors present an optimization algorithm for interface task layout. • The performing process of the proposed algorithm is depicted. • The performance evaluation adopted a neural network method. • The optimized layouts of an event's interface tasks were obtained by experiments. - Abstract: This is the last in a series of papers describing the optimal design of a digital human–computer interface of a nuclear power plant (NPP) from three different viewpoints based on human reliability. The purpose of this series is to propose different optimization methods from varying perspectives to decrease the human factor events that arise from defects of a human–computer interface. The present paper addresses how to effectively lay out interface tasks across different screens, with the aim of decreasing human errors by reducing the distance that an operator moves among different screens in each operation. To resolve the problem, the authors propose an optimization process for interface task layout for the digital human–computer interface of an NPP. To automatically lay out each interface task on one of the screens in each operation, the paper presents a shortest-moving-path optimization algorithm with a dynamic flag based on human reliability. To test the algorithm's performance, the evaluation method uses a neural network based on human reliability: the lower the human error probabilities are, the better the interface task layouts among different screens are. Thus, by analyzing the performance of each interface task layout, the optimization result is obtained. Finally, the optimized layouts of the spurious safety injection event interface tasks of the NPP are obtained in an experiment; the proposed method has good accuracy and stability.
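
The layout problem can be illustrated with a brute-force toy version: assign tasks to screens, subject to a capacity limit, so that the operator's screen switches along an operation sequence are minimized. This is a simplified stand-in, not the authors' shortest-moving-path algorithm:

```python
from itertools import product

def switch_cost(assignment, sequence):
    """Count of screen changes the operator makes while executing the
    operation's task sequence under a given task -> screen assignment."""
    screens = [assignment[t] for t in sequence]
    return sum(a != b for a, b in zip(screens, screens[1:]))

# Toy case: 4 interface tasks, 2 screens holding at most 2 tasks each.
tasks = ["A", "B", "C", "D"]
sequence = ["A", "B", "A", "C", "D", "C"]   # one hypothetical operation
candidates = [
    dict(zip(tasks, combo))
    for combo in product([0, 1], repeat=len(tasks))
    if max(combo.count(0), combo.count(1)) <= 2   # screen capacity limit
]
best = min(candidates, key=lambda a: switch_cost(a, sequence))
print(best, switch_cost(best, sequence))  # A,B share a screen; C,D the other
```

Grouping the tasks that follow each other in the sequence onto the same screen reduces the operator's movement, which is the mechanism the paper exploits to reduce human error probability.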

  10. Machine Maintenance Scheduling with Reliability Engineering Method and Maintenance Value Stream Mapping

    Science.gov (United States)

    Sembiring, N.; Nasution, A. H.

    2018-02-01

    Corrective maintenance, i.e., replacing or repairing a machine component after it breaks down, is always done in a manufacturing company. It forces the production process to stop: production time decreases while the maintenance team replaces or repairs the damaged machine component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability engineering and maintenance value stream mapping are used as a method and a tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
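
The reliability-engineering side of such a schedule can be sketched with a Weibull model: choose the longest replacement interval that keeps component reliability above a target. The shape and scale parameters below are invented for illustration, not taken from the paper:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) for a Weibull time-to-failure (beta: shape, eta: scale)."""
    return math.exp(-((t / eta) ** beta))

def preventive_interval(beta, eta, target=0.90, step=1.0):
    """Largest interval (same time unit as eta) at which the component
    still meets the target reliability."""
    t = 0.0
    while weibull_reliability(t + step, beta, eta) >= target:
        t += step
    return t

# Hypothetical wear-out component: beta = 2.2, eta = 1200 operating hours.
print(preventive_interval(2.2, 1200.0, target=0.90, step=10.0))  # 430.0
```

A shape parameter above 1 indicates wear-out failures, which is precisely the case where scheduled replacement beats run-to-failure corrective maintenance.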

  11. A review of the evolution of human reliability analysis methods at nuclear industry

    International Nuclear Information System (INIS)

    Oliveira, Lécio N. de; Santos, Isaac José A. Luquetti dos; Carvalho, Paulo V.R.

    2017-01-01

    This paper reviews the status of research on the application of human reliability analysis methods in the nuclear industry and their evolution over the years. Human reliability analysis (HRA) is one of the elements used in Probabilistic Safety Analysis (PSA) and is performed as part of PSAs to quantify the likelihood that people will fail to take action, covering errors of omission and errors of commission. Although HRA may be used in many areas, the focus of this paper is to review the applicability of HRA methods over the years in the nuclear industry, especially in Nuclear Power Plants (NPPs). An electronic search on the CAPES Portal of Journals (a bibliographic database) was performed. This literature review covers original papers published from the first generation of HRA methods until those published in March 2017. A total of 94 papers were retrieved by the initial search, and 13 were selected to be fully reviewed and used for data extraction after applying inclusion and exclusion criteria and evaluating quality and suitability according to applicability in the nuclear industry. The results point out that first-generation methods are more used in practice than second-generation methods. This occurs because the first generation concentrates on quantification, in terms of the success or failure of human action, which makes those methods useful for the quantitative risk assessment of PSA. Although the second generation considers context and errors of commission in human error prediction, these methods are not widely used in practice in the nuclear industry for PSA. (author)

  12. A review of the evolution of human reliability analysis methods at nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Lécio N. de; Santos, Isaac José A. Luquetti dos; Carvalho, Paulo V.R., E-mail: lecionoliveira@gmail.com, E-mail: luquetti@ien.gov.br, E-mail: paulov@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    This paper reviews the status of research on the application of human reliability analysis methods in the nuclear industry and their evolution over the years. Human reliability analysis (HRA) is one of the elements used in Probabilistic Safety Analysis (PSA) and is performed as part of PSAs to quantify the likelihood that people will fail to take action, covering errors of omission and errors of commission. Although HRA may be used in many areas, the focus of this paper is to review the applicability of HRA methods over the years in the nuclear industry, especially in Nuclear Power Plants (NPPs). An electronic search on the CAPES Portal of Journals (a bibliographic database) was performed. This literature review covers original papers published from the first generation of HRA methods until those published in March 2017. A total of 94 papers were retrieved by the initial search, and 13 were selected to be fully reviewed and used for data extraction after applying inclusion and exclusion criteria and evaluating quality and suitability according to applicability in the nuclear industry. The results point out that first-generation methods are more used in practice than second-generation methods. This occurs because the first generation concentrates on quantification, in terms of the success or failure of human action, which makes those methods useful for the quantitative risk assessment of PSA. Although the second generation considers context and errors of commission in human error prediction, these methods are not widely used in practice in the nuclear industry for PSA. (author)

  13. METHODS OF IMPROVING THE RELIABILITY OF THE CONTROL SYSTEM TRACTION POWER SUPPLY OF ELECTRIC TRANSPORT BASED ON AN EXPERT INFORMATION

    Directory of Open Access Journals (Sweden)

    O. O. Matusevych

    2009-03-01

    Full Text Available The author proposes numerous methods for solving the multi-criterion task of increasing the reliability of a control system on the basis of expert information. Information that allows a well-reasoned choice of the method of reliability improvement for an electric transport control system is considered.

  14. A fast and reliable readout method for quantitative analysis of surface-enhanced Raman scattering nanoprobes on chip surface

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol; Jeong, Dae Hong, E-mail: yslee@snu.ac.kr, E-mail: debobkr@gmail.com, E-mail: jeongdh@snu.ac.kr [Department of Chemistry Education, Seoul National University, Seoul 151-742 (Korea, Republic of); Kang, Homan [Interdisciplinary Program in Nano-Science and Technology, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, Yoon-Sik, E-mail: yslee@snu.ac.kr, E-mail: debobkr@gmail.com, E-mail: jeongdh@snu.ac.kr [Interdisciplinary Program in Nano-Science and Technology, Seoul National University, Seoul 151-742 (Korea, Republic of); School of Chemical and Biological Engineering, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, Ho-Young, E-mail: yslee@snu.ac.kr, E-mail: debobkr@gmail.com, E-mail: jeongdh@snu.ac.kr [Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam 463-707 (Korea, Republic of)

    2015-05-15

    Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplex capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement times for on-chip immunoassays adopting a large chip area of approximately 1-mm scale and a confocal beam point of ca. 1-μm size. Alternative methods such as a sampled spot scan with high confocality and a large-area scan with enlarged field of view and low confocality have been utilized in order to minimize the measurement time in practice. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-led signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we propose a methodology for fast and reliable quantitative measurement of the whole chip area. The proposed method adopts a raster scan covering the full area of a 100 μm × 100 μm region as a proof-of-concept experiment while accumulating signals in the CCD detector for a single spectrum per frame. A single 10-s scan over the 100 μm × 100 μm area yielded much higher sensitivity than the sampled spot scanning measurements and none of the signal fluctuations attributed to the sampled spot scan. This readout method can serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.

  15. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    exposures observed with error. However, compared with CEM, CGBS is easier to implement and has more desirable bias-reducing properties in the presence of substantial proportions of missing exposure data. Conclusion The CGBS approach could be useful for estimating exposure-disease associations in semi-ecological studies when the true group means are ordered and the number of measured exposures in each group is small. These findings have important implications for the cost-effective design of semi-ecological studies because they enable investigators to more reliably estimate exposure-disease associations with a smaller exposure measurement campaign than with the analytical methods that were historically employed.

  16. BICLUSTERING METHODS FOR RE-ORDERING DATA MATRICES IN SYSTEMS BIOLOGY, DRUG DISCOVERY AND TOXICOLOGY

    Directory of Open Access Journals (Sweden)

    Christodoulos A. Floudas

    2010-12-01

    Full Text Available Biclustering has emerged as an important problem in the analysis of gene expression data since genes may only jointly respond over a subset of conditions. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition and thus result in suboptimal clusters. In the first part of the presentation, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric [1,2]. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. The performance of OREO is tested on several important data matrices arising in systems biology to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. In the second part of the talk, we will focus on novel methods for clustering of data matrices that are very sparse [3]. These types of data matrices arise in drug discovery where the x- and y-axis of a data matrix can correspond to different functional groups for two distinct substituent sites on a molecular scaffold. Each possible x and y pair corresponds to a single molecule which can be synthesized and tested for a certain property, such as percent inhibition of a protein function. For even moderate size matrices, synthesizing and testing a small fraction of the molecules is labor intensive and not economically feasible. Thus, it is of paramount importance to have a reliable method for guiding the synthesis process to select molecules that have a high probability of success. In the second part of the presentation, we introduce a new strategy to enable efficient substituent reordering and descriptor-free property estimation. Our approach casts

  17. Reliability of the input admittance of bowed-string instruments measured by the hammer method.

    Science.gov (United States)

    Zhang, Ailin; Woodhouse, Jim

    2014-12-01

    The input admittance at the bridge, measured by hammer testing, is often regarded as the most useful and convenient measurement of the vibrational behavior of a bowed-string instrument. However, this method has been questioned, especially because of differences between human bowing and hammer impact. The goal of the research presented here is to investigate the reliability and accuracy of this classic hammer method. Experimental studies were carried out on cellos with three different driving conditions and three different boundary conditions. The results suggest that there is nothing fundamentally different about the hammer method compared to other kinds of excitation. The third series of experiments offers an opportunity to compare the input admittance measured from one bridge corner to the other with that of single strings. The classic measurement is found to give a reasonable approximation to that of all four strings. Possible differences between the hammer method and normal bowing, and the implications of the acoustical results, are also discussed.

  18. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The multiple failure probabilities required by the posterior approximation are assessed by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
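    Transforming a probabilistic constraint hinges on evaluating a failure probability P[g(X) ≤ 0]. GSS itself is beyond a short sketch, but a crude Monte Carlo estimator of such a constraint value can be illustrated as follows; the limit state and distributions below are hypothetical, not taken from the article:

```python
import math
import random

def failure_probability(g, sample, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P[g(X) <= 0] for a limit state g."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) <= 0)
    return failures / n

# Hypothetical limit state: g(x) = 3 - x1 - x2 with X1, X2 ~ N(1, 0.5^2).
# X1 + X2 ~ N(2, 0.5), so the exact failure probability is 1 - Phi(sqrt(2)).
g = lambda x: 3.0 - x[0] - x[1]
sample = lambda rng: (rng.gauss(1.0, 0.5), rng.gauss(1.0, 0.5))
p_f = failure_probability(g, sample)
p_exact = 1.0 - 0.5 * (1.0 + math.erf(1.0))  # Phi(sqrt(2)) = (1 + erf(1))/2
```

In an RBDO loop, an estimate like `p_f` would be compared against a target value to form the transformed deterministic constraint.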

  19. Operator ordering in quantum optics theory and the development of Dirac's symbolic method

    International Nuclear Information System (INIS)

    Fan Hongyi

    2003-01-01

    We present a general unified approach for arranging quantum operators of optical fields into ordered products (normal ordering, antinormal ordering, and Weyl or symmetric ordering) by fashioning Dirac's symbolic method and representation theory. We propose the technique of integration within an ordered product (IWOP) of operators to realize our goal. The IWOP makes Dirac's representation theory and the symbolic method more transparent and consequently more easily understood. The beauty of Dirac's symbolic method is further revealed. Various applications of the IWOP technique, such as in developing the entangled state representation theory, nonlinear coherent state theory, and Wigner function theory, are presented. (review article)

  20. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The integral-form sensitivity formulation is efficient because no additional function evaluations are needed once the failure probability or statistical moments have been calculated.
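    Moment methods estimate statistical moments of a response rather than sampling it exhaustively. The function approximation moment method itself is not reproduced here; as a hedged stand-in, the sketch below uses the 3-point probabilists' Gauss-Hermite rule (exact for polynomials up to degree 5 against the standard normal density) to estimate the mean and variance of a hypothetical response g(X):

```python
import math

# 3-point probabilists' Gauss-Hermite rule: nodes +-sqrt(3) and 0,
# weights 1/6, 2/3, 1/6; exact up to degree-5 polynomials for X ~ N(0, 1).
NODES = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
WEIGHTS = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)

def response_moments(g):
    """Mean and variance of g(X), X ~ N(0, 1), via quadrature."""
    m1 = sum(w * g(x) for w, x in zip(WEIGHTS, NODES))
    m2 = sum(w * g(x) ** 2 for w, x in zip(WEIGHTS, NODES))
    return m1, m2 - m1 ** 2

# Example response g(x) = x^2 + x: exact mean is 1 and exact variance is 3.
mean, var = response_moments(lambda x: x * x + x)
```

With three function evaluations the rule reproduces the exact low-order moments of this polynomial response, which is the economy moment methods exploit.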

  1. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint

    Directory of Open Access Journals (Sweden)

    Ang Gong

    2015-12-01

    Full Text Available For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal one, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix the ambiguities reliably and achieve a high success rate. The experiments also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
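    At its core, the ambiguity search minimizes a quadratic form over integer candidates. The LAMBDA decorrelation step is not reproduced here; the sketch below is a naive box search around the rounded float solution, with hypothetical numbers, to illustrate the underlying integer least-squares problem:

```python
import itertools

def ils_box_search(a_float, q_inv, radius=3):
    """Naive integer least-squares: enumerate an integer box around the
    rounded float solution and minimize (z - a)^T Qinv (z - a)."""
    m = len(a_float)
    center = [round(a) for a in a_float]
    best_z, best_cost = None, float("inf")
    for dz in itertools.product(range(-radius, radius + 1), repeat=m):
        z = [c + d for c, d in zip(center, dz)]
        r = [zi - ai for zi, ai in zip(z, a_float)]
        cost = sum(r[i] * q_inv[i][j] * r[j] for i in range(m) for j in range(m))
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z, best_cost

# Hypothetical float ambiguities with an identity inverse covariance:
z_fixed, cost = ils_box_search([1.2, 3.8], [[1.0, 0.0], [0.0, 1.0]])
```

LAMBDA makes the same minimization tractable for correlated, high-dimensional ambiguities by decorrelating before searching; this brute-force version only works for tiny problems.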

  2. Prediction method of long-term reliability in improving residual stresses by means of surface finishing

    International Nuclear Information System (INIS)

    Sera, Takehiko; Hirano, Shinro; Chigusa, Naoki; Okano, Shigetaka; Saida, Kazuyoshi; Mochizuki, Masahito; Nishimoto, Kazutoshi

    2012-01-01

    Surface finishing methods such as water jet peening (WJP) have been applied to welds in some major components of nuclear power plants as a countermeasure to primary water stress corrosion cracking (PWSCC). In addition, surface finishing by buffing is being standardized and has likewise come to be recognized as a well-established stress-improvement method. The long-term stability of peening techniques has been confirmed by accelerated tests. However, the stress improvement achieved by surface treatment is limited to thin layers, and the effect of the complicated residual stress distribution in the weld metal beneath the surface is not strictly taken into account in assessments of long-term stability. This paper therefore describes accelerated tests which confirmed that the long-term stability of a layer subjected to buffing is equal to that of one subjected to WJP. The long-term reliability of the very thin stress-improved layer was also confirmed through a trial evaluation by thermal elastic-plastic creep analysis, even when the effect of the complicated residual stress distribution in the weld metal was conservatively overestimated. Based on these findings, an approach is proposed for constructing a method of predicting the long-term reliability of stress improvement by surface finishing. (author)

  3. Validity and reliability of a method for assessment of cervical vertebral maturation.

    Science.gov (United States)

    Zhao, Xiao-Guang; Lin, Jiuxiang; Jiang, Jiu-Hui; Wang, Qingzhu; Ng, Sut Hong

    2012-03-01

    To evaluate the validity and reliability of the cervical vertebral maturation (CVM) method with a longitudinal sample, 86 cephalograms from 18 subjects (5 males and 13 females) were selected from a longitudinal database. Total mandibular length was measured on each film; its rate of increase served as the gold standard in examining the validity of the CVM method. Eleven orthodontists, after receiving intensive training in the CVM method, evaluated all films twice. Kendall's W and the weighted kappa statistic were employed. Kendall's W values were higher than 0.8 at both times, indicating strong interobserver reproducibility, but interobserver agreement was below 50% on both occasions. A wide range of intraobserver agreement was noted (40.7%-79.1%), and substantial intraobserver reproducibility was shown by kappa values (0.53-0.86). With regard to validity, moderate agreement was found between the gold standard and observer staging at the initial time (kappa values 0.44-0.61). However, agreement appeared unacceptable for clinical use, especially in cervical stage 3 (26.8%). Even though the validity and reliability of the CVM method proved statistically acceptable, we suggest that other growth indicators also be taken into consideration when evaluating adolescent skeletal maturation.
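    The weighted kappa statistic used in studies like this can be computed directly from two raters' ordinal scores. A minimal sketch (quadratic disagreement weights by default; the function name and data are illustrative, not the study's):

```python
def weighted_kappa(r1, r2, weights="quadratic"):
    """Weighted Cohen's kappa for two raters' ordinal scores."""
    cats = sorted(set(r1) | set(r2))
    k = len(cats)
    idx = {c: i for i, c in enumerate(cats)}
    n = len(r1)
    # Observed proportion matrix and its marginals.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    def w(i, j):  # disagreement weight, 0 on the diagonal
        d = abs(i - j) / (k - 1)
        return d * d if weights == "quadratic" else d
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

kappa_perfect = weighted_kappa([1, 2, 3], [1, 2, 3])      # full agreement -> 1
kappa_chance = weighted_kappa([1, 1, 2, 2], [1, 2, 1, 2])  # chance level -> 0
```

Values of 0.53-0.86, as reported above, would indicate moderate to substantial agreement beyond chance.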

  4. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    Science.gov (United States)

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, a computerized symbolic algebra system. DC-based energy expressions of the standard CC and MPPT methods and of the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. Numerical assessment on hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies at significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.

  5. Convergency analysis of the high-order mimetic finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Veiga Da Beirao, L [UNIV DEGLI STUDI; Manzini, G [NON LANL

    2008-01-01

    We prove second-order convergence of the conservative variable and its flux in the high-order MFD method. The convergence results are proved for unstructured polyhedral meshes and full tensor diffusion coefficients. For the case of non-constant coefficients, we also develop a new family of high-order MFD methods. Theoretical results are confirmed through numerical experiments.
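    A second-order convergence claim like this is typically verified numerically by computing the observed order from errors on two successively refined meshes, assuming e ≈ C h^p. A minimal sketch (names and numbers are illustrative):

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed convergence rate p from errors on two meshes, e ~ C h^p."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical errors consistent with second-order convergence (e = h^2):
p = observed_order(0.1, 0.01, 0.05, 0.0025)
```

Halving the mesh size quarters the error, so the observed order comes out as 2, matching the theoretical rate.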

  6. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach for dealing with uncertainties in subsurface flow and transport modeling, such as those associated with hydrogeologic parameters, boundary conditions, and initial conditions, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating complete probability distributions of model predictions at all locations and all times. CALRELTOUGH has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.

  7. A reliable method for reconstituting thymectomized, lethally irradiated guinea pigs with bone marrow cells

    International Nuclear Information System (INIS)

    Terata, N.; Tanio, Y.; Zbar, B.

    1984-01-01

    The authors developed a reliable method for reconstituting thymectomized, lethally irradiated guinea pigs. Injection of 2.5-10 × 10^7 syngeneic bone marrow cells into adult thymectomized, lethally irradiated guinea pigs produced survival of 46-100% of treated animals. Gentamycin sulfate (5 mg/kg of body weight) for 10 days was required for optimal results. Acidified drinking water (pH 2.5) appeared to be required for optimal results. Thymectomized, lethally irradiated, bone marrow reconstituted ('B') guinea pigs had impaired ability to develop delayed cutaneous hypersensitivity to mycobacterial antigens and cutaneous basophil hypersensitivity to keyhole limpet hemocyanin; proliferative responses to phytohemagglutinin were impaired. (Auth.)

  8. Radiologic identification of disaster victims: A simple and reliable method using CT of the paranasal sinuses

    International Nuclear Information System (INIS)

    Ruder, Thomas D.; Kraehenbuehl, Markus; Gotsmy, Walther F.; Mathier, Sandra; Ebert, Lars C.; Thali, Michael J.; Hatch, Gary M.

    2012-01-01

    Objective: To assess the reliability of radiologic identification based on visual comparison of ante mortem and post mortem paranasal sinus computed tomography (CT). Subjects and methods: The study was approved by the responsible justice department and university ethics committee. Four blinded readers with varying radiological experience separately compared 100 post mortem head CTs to 25 ante mortem head CTs with the goal of identifying as many matching pairs as possible (out of 23 possible matches). Sensitivity, specificity, and positive and negative predictive values were calculated for all readers. The chi-square test was applied to establish whether there was a significant difference in sensitivity between radiologists and non-radiologists. Results: Across all readers, sensitivity was 83.7%, specificity was 100.0%, negative predictive value (NPV) was 95.4%, positive predictive value (PPV) was 100.0%, and accuracy was 96.3%. For radiologists, sensitivity was 97.8%, NPV was 99.4%, and accuracy was 99.5%. For non-radiologists, average sensitivity was 69.6%, NPV was 91.7%, and accuracy was 93.0%. Radiologists achieved a significantly higher sensitivity (p < 0.01) than non-radiologists. Conclusions: Visual comparison of ante mortem and post mortem head CT is a robust and reliable method for identifying unknown decedents, particularly with regard to positive matches. The sensitivity and NPV of the method depend on the reader's experience.
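    Figures like those reported derive from a standard confusion matrix. A minimal sketch of the metric definitions, using hypothetical counts (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table."""
    sens = tp / (tp + fn)                    # true matches found
    spec = tn / (tn + fp)                    # non-matches correctly rejected
    ppv = tp / (tp + fp)                     # declared matches that are real
    npv = tn / (tn + fn)                     # declared non-matches that are real
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, ppv, npv, acc

# Hypothetical pooled counts: 45 true matches found, 9 missed, no false matches.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=45, fp=0, fn=9, tn=100)
```

With zero false positives, specificity and PPV are both 1.0, mirroring the pattern reported above where every declared match was correct.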

  9. How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.

    Science.gov (United States)

    Gray, Kurt

    2017-09-01

    Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies, and to reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately questions of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests a psychological tension between them: the more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.

  10. Accuracy and Reliability of the Klales et al. (2012) Morphoscopic Pelvic Sexing Method.

    Science.gov (United States)

    Lesciotto, Kate M; Doershuk, Lily J

    2018-01-01

    Klales et al. (2012) devised an ordinal scoring system for the morphoscopic pelvic traits described by Phenice (1969) and used for sex estimation of skeletal remains. The aim of this study was to test the accuracy and reliability of the Klales method using a large sample from the Hamann-Todd collection (n = 279). Two observers were blinded to sex, ancestry, and age and used the Klales et al. method to estimate the sex of each individual. Sex was correctly estimated for females with over 95% accuracy; however, the male allocation accuracy was approximately 50%. Weighted Cohen's kappa and intraclass correlation coefficient analysis for evaluating intra- and interobserver error showed moderate to substantial agreement for all traits. Although each trait can be reliably scored using the Klales method, low accuracy rates and high sex bias indicate better trait descriptions and visual guides are necessary to more accurately reflect the range of morphological variation. © 2017 American Academy of Forensic Sciences.
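    Per-sex allocation accuracy and sex bias (the difference between male and female accuracy) can be tallied directly from paired true and estimated sexes. A minimal sketch with hypothetical data; the counts below are illustrative, not the study's:

```python
def allocation_accuracy(true_sex, estimated_sex):
    """Per-group allocation accuracy and sex bias (male minus female accuracy)."""
    stats = {}
    for t, e in zip(true_sex, estimated_sex):
        correct, total = stats.get(t, (0, 0))
        stats[t] = (correct + (t == e), total + 1)
    acc = {g: c / n for g, (c, n) in stats.items()}
    return acc, acc["M"] - acc["F"]

# Hypothetical sample: 19/20 females and 10/20 males correctly allocated.
true = ["F"] * 20 + ["M"] * 20
est = ["F"] * 19 + ["M"] + ["M"] * 10 + ["F"] * 10
acc, bias = allocation_accuracy(true, est)
```

A large negative bias like this one (high female accuracy, near-chance male accuracy) is the pattern the abstract describes.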

  11. The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-01-01

    Full Text Available There is growing interest in finding a globally optimal path in transportation networks, particularly when the network suffers from unexpected disturbances. This paper studies the problem of finding a globally optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which travel time is stochastic rather than deterministic. Traditional path finding methods based on least expected travel time cannot capture network users' risk-taking behavior. To overcome this limitation, reliable path finding algorithms have been proposed, but their convergence to the global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into a backtracking method to propose a new path finding algorithm under uncertainty. The global optimality of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
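    The key point, that the least-expected-time path need not maximize the probability of on-time arrival, can be illustrated by brute-force enumeration of simple paths with independent normally distributed link times. This toy sketch is not the article's K-shortest-path/backtracking algorithm; the network and numbers are hypothetical:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def most_reliable_path(graph, src, dst, budget):
    """graph[u] = list of (v, mean, variance); maximize P(travel time <= budget)."""
    best = (None, -1.0)
    def dfs(u, path, mu, var):
        nonlocal best
        if u == dst:
            p = phi((budget - mu) / math.sqrt(var)) if var > 0 else float(mu <= budget)
            if p > best[1]:
                best = (list(path), p)
            return
        for v, m, s2 in graph.get(u, []):
            if v not in path:  # keep paths simple
                path.append(v)
                dfs(v, path, mu + m, var + s2)
                path.pop()
    dfs(src, [src], 0.0, 0.0)
    return best

# A-C-D has the least expected time (8 vs 10) but high variance;
# with a budget of 12, the steadier A-B-D is more reliable.
graph = {"A": [("B", 5.0, 1.0), ("C", 4.0, 9.0)],
         "B": [("D", 5.0, 1.0)],
         "C": [("D", 4.0, 9.0)]}
path, prob = most_reliable_path(graph, "A", "D", budget=12.0)
```

Exhaustive enumeration is exponential in general, which is why the article needs a convergent search scheme rather than brute force.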

  12. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1-3 sets. Its service life may be very difficult to determine experimentally for economic and technical reasons. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack formation life (i.e., the fatigue life or damage accumulation life) and the crack propagation life. Methods of predicting machine life have been systematically summarized, with emphasis on those that use theoretical analysis to substitute for large-scale prototype experiments. Meanwhile, methods and steps for predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  13. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside a digitalized system plays a very important role because its failure may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for safety critical software have limitations caused by the difficulty of developing input sets in the form of trajectories, i.e., series of successive values of variables. To address these limitations, this study proposes another method which conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, the logical relations between variables, the plant dynamics under certain situations, and the characteristics of information acquisition by the digital device are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic.

  14. Numerical methods for reliability and safety assessment multiscale and multiphysics systems

    CERN Document Server

    Hami, Abdelkhalak

    2015-01-01

    This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also: introduces novel numerical methods; illustrates new practical applications; examines recent engineering applications; presents up-to-date theoretical results; and offers perspective relevant to a wide audience, including teaching faculty/graduate students, researchers, and practicing engineers.

  15. Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures

    Directory of Open Access Journals (Sweden)

    Mahdi Shadab Far

    2016-01-01

    Full Text Available Structural load types, on the one hand, and structural capacity to withstand these loads, on the other hand, are of a probabilistic nature as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches towards the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it does suffer from certain disadvantages, the biggest one being the requirement of a very large number of samples to handle small probabilities, leading to a high computational cost. In this paper, a simple algorithm was proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach was then presented in a step-by-step flowchart, for the purpose of easy programming and implementation.
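    The sample-size burden for small probabilities follows from the coefficient of variation of the crude Monte Carlo estimator, sqrt((1 − p)/(n·p)). A minimal sketch of the required sample count for a target coefficient of variation (illustrative of the problem the article addresses, not the paper's revised algorithm):

```python
import math

def required_samples(p_f, target_cov):
    """Monte Carlo sample size n such that the estimator's coefficient of
    variation, sqrt((1 - p)/(n * p)), falls to target_cov."""
    return math.ceil((1.0 - p_f) / (p_f * target_cov ** 2))

# For a failure probability of 1e-3 and a 10% coefficient of variation,
# roughly 1e5 samples are needed; rarer failures need proportionally more.
n_needed = required_samples(0.001, 0.1)
```

This inverse dependence on p is exactly the "very large number of samples" disadvantage cited above, and the motivation for variance-reduction or revised sampling schemes.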

  16. Using graph models for evaluating in-core monitoring system reliability by the method of imitating simulation

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Zyuzin, N.N.; Levin, G.L.; Chesnokov, A.N.

    1987-01-01

    An approach for estimating the reliability factors of complex redundant systems at early stages of development using the method of imitating simulation is considered. Different types of models, with their merits and drawbacks, are given. The features of in-core monitoring systems and the advisability of applying graph models and elements of graph theory for estimating the reliability of such systems are shown. The results of an investigation of the reliability factors of the reactor monitoring, control, and core local protection subsystem are presented.

  17. A fast method for calculating reliable event supports in tree reconciliations via Pareto optimality.

    Science.gov (United States)

    To, Thu-Hien; Jacox, Edwin; Ranwez, Vincent; Scornavacca, Celine

    2015-11-14

    Given a gene tree and a species tree, reconciliation methods attempt to retrieve the macro-evolutionary events that best explain the discrepancies between the two tree topologies. The DTL parsimonious approach searches for a most parsimonious reconciliation between a gene tree and a (dated) species tree, considering four possible macro-evolutionary events (speciation, duplication, transfer, and loss) with specific costs. Unfortunately, many events are erroneously predicted due to errors in the input trees, inappropriate input cost values, or the existence of several equally parsimonious scenarios. It is thus crucial to provide a measure of reliability for predicted events. It has recently been proposed that the reliability of an event can be estimated via its frequency in the set of most parsimonious reconciliations obtained using a variety of reasonable input cost vectors. To compute such a support, a straightforward but time-consuming approach is to generate cost vectors slightly departing from the original one, independently compute the set of all most parsimonious reconciliations for each vector, and combine these sets a posteriori. Another proposed approach uses Pareto optimality to partition cost values into regions that induce reconciliations with the same number of DTL events. The support of an event is then defined as its frequency in the set of regions. Often, however, the number of regions is not large enough to provide reliable supports. We present here a method to compute event supports efficiently via a polynomial-sized graph, which can represent all reconciliations for several different costs. Moreover, two methods are proposed to take into account alternative input costs: either explicitly providing an input cost range or allowing a tolerance on the extra cost of a reconciliation. Our methods are faster than the region-based method, substantially faster than the sampling-costs approach, and have a higher event-prediction accuracy on

  18. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
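    A common numerical realization of an order-α (non-integral) derivative, on which methods like those above build, is the Grünwald-Letnikov approximation. The sketch below does not reproduce the article's factor-analysis formulas; it is checked against the known half-order derivative of f(t) = t, which equals 2·sqrt(t/π):

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t,
    using the full memory of f on [0, t] with step h."""
    n = int(t / h)
    coeff = 1.0       # running value of (-1)^k * binom(alpha, k)
    total = f(t)      # k = 0 term
    for k in range(1, n + 1):
        coeff *= (k - 1 - alpha) / k   # recursive update of the GL weight
        total += coeff * f(t - k * h)
    return total / h ** alpha

# Half-derivative of f(t) = t at t = 1; exact value is 2 / sqrt(pi).
approx = gl_fractional_derivative(lambda t: t, alpha=0.5, t=1.0)
exact = 2.0 / math.sqrt(math.pi)
```

For integer alpha the weights collapse to the usual finite-difference stencils, which is why the fractional methods generalize the standard first-order ones mentioned in the abstract.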

  19. Application of He's variational iteration method to the fifth-order boundary value problems

    International Nuclear Information System (INIS)

    Shen, S

    2008-01-01

    The variational iteration method is introduced to solve fifth-order boundary value problems. This method provides an efficient approach to solving this type of problem without discretization and without computing the Adomian polynomials. Numerical results demonstrate that the method is a promising and powerful tool for solving fifth-order boundary value problems.

  20. A Fifth Order Hybrid Linear Multistep method For the Direct Solution ...

    African Journals Online (AJOL)

    A linear multistep hybrid method (LMHM) with continuous coefficients is considered and directly applied to solve third order initial and boundary value problems (IBVPs). The continuous method is used to obtain Multiple Finite Difference Methods (MFDMs) (each of order 5) which are combined as simultaneous numerical ...

  1. Power Cycling Test Method for Reliability Assessment of Power Device Modules in Respect to Temperature Stress

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Jørgensen, Søren

    2018-01-01

    Power cycling test is one of the important tasks to investigate the reliability performance of power device modules in respect to temperature stress. From this, the lifetime of a component in power converters can be predicted. In this paper, representative power cycling test circuits......, measurement circuits of wear-out failure indicators as well as measurement strategies for different power cycling test circuits are discussed in order to provide the current state of knowledge of this topic by organizing and evaluating current literature. In the first section of this paper, the structure...... of a conventional power device module and its related wear-out failure mechanisms with degradation indicators are discussed. Then, representative power cycling test circuits are introduced. Furthermore, on-state collector-emitter voltage (VCE,ON) and forward voltage (VF) measurement circuits for wear-out condition...

  2. Procedures and methods that increase reliability and reproducibility of the transplanted kidney perfusion index

    International Nuclear Information System (INIS)

    Smokvina, A.

    1994-01-01

    At different times following surgery and during various complications, 119 studies were performed on 57 patients. In many patients studies were repeated several times. Twenty-three studies were performed in as many patients, in whom a normal function of the transplanted kidney was established by other diagnostic methods and retrospective analysis. Comparison was made of the perfusion index results obtained by the Hilson et al. method from 1978 and the ones obtained by my own modified method, which for calculating the index also takes into account: the time difference in appearance of the initial portions of the artery and kidney curves; the positioning of the region of interest over the distal part of the aorta; the bolus injection into the arteriovenous shunt of the forearm with high specific activity of small volumes of Tc-99m labelled agents; a fast 0.5 seconds study of data collection; and a standard for normalization of numerical data. The reliability of one or the other method tested by simulated time shift of the peak of arterial curves shows that the deviation percentage from the main index value in the unmodified method is 2-5 times greater than in the modified method. The normal value of the perfusion index applying the modified method is 91-171. (author)

  3. A Reliable Method for the Evaluation of the Anaphylactoid Reaction Caused by Injectable Drugs

    Directory of Open Access Journals (Sweden)

    Fang Wang

    2016-10-01

    Full Text Available Adverse reactions of injectable drugs usually occur at first administration and are closely associated with the dosage and speed of injection. This phenomenon is correlated with the anaphylactoid reaction. However, up to now, study methods based on antigen detection have still not gained wide acceptance and single physiological indicators cannot be utilized to differentiate anaphylactoid reactions from allergic reactions and inflammatory reactions. In this study, a reliable method for the evaluation of anaphylactoid reactions caused by injectable drugs was established by using multiple physiological indicators. We used compound 48/80, ovalbumin and endotoxin as the sensitization agents to induce anaphylactoid, allergic and inflammatory reactions. Different experimental animals (guinea pig and nude rat and different modes of administration (intramuscular, intravenous and intraperitoneal injection and different times (15 min, 30 min and 60 min were evaluated to optimize the study protocol. The results showed that the optimal way to achieve sensitization involved treating guinea pigs with the different agents by intravenous injection for 30 min. Further, seven related humoral factors including 5-HT, SC5b-9, Bb, C4d, IL-6, C3a and histamine were detected by HPLC analysis and ELISA assay to determine their expression level. The results showed that five of them, including 5-HT, SC5b-9, Bb, C4d and IL-6, displayed significant differences between anaphylactoid, allergic and inflammatory reactions, which indicated that their combination could be used to distinguish these three reactions. Then different injectable drugs were used to verify this method and the results showed that the chosen indicators exhibited good correlation with the anaphylactoid reaction which indicated that the established method was both practical and reliable. 
Our research provides a feasible method for the diagnosis of the serious adverse reactions caused by injectable drugs which

  4. Validity and reliability of the session-RPE method for quantifying training load in karate athletes.

    Science.gov (United States)

    Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B

    2015-04-24

    To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate: HR; blood lactate concentration: [La-]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes: ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3% and height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm) were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La-] were recorded. Significant correlations were found between RPE and the physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La-]: r = 0.62, 95% CI = 0.49-0.75) and between sRPE and the HR-based methods for quantifying training load (r = 0.65-0.95), together with significant reliability of the same intensity across training sessions (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
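For context, the internal-load quantities compared in this study are simple to compute. A hypothetical sketch (illustrative session numbers, not the study's data) of Foster's session-RPE load and Edwards's zone-weighted heart-rate method:

```python
def session_rpe_load(rpe_cr10, duration_min):
    """Foster's session-RPE internal load: RPE (CR-10 scale) x duration (min), in AU."""
    return rpe_cr10 * duration_min

def edwards_trimp(minutes_in_zone):
    """Edwards's TRIMP: minutes spent in each of five HR zones
    (50-60% ... 90-100% HRmax), weighted by the zone index 1..5."""
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zone))

print(session_rpe_load(7, 90))              # 7 x 90 = 630 AU
print(edwards_trimp([10, 20, 30, 20, 10]))  # 10 + 40 + 90 + 80 + 50 = 270
```

The study's validity argument is precisely that these two independently computed loads correlate strongly across sessions.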

  5. Block Hybrid Collocation Method with Application to Fourth Order Differential Equations

    Directory of Open Access Journals (Sweden)

    Lee Ken Yap

    2015-01-01

    Full Text Available The block hybrid collocation method with three off-step points is proposed for the direct solution of fourth order ordinary differential equations. Interpolation and collocation techniques are applied to a basic polynomial to generate the main and additional methods. These methods are implemented in block form to obtain the approximation at seven points simultaneously. Numerical experiments are conducted to illustrate the efficiency of the method. The method is also applied to solve a fourth order problem from ship dynamics.

  6. Novel Methods to Enhance Precision and Reliability in Muscle Synergy Identification during Walking

    Science.gov (United States)

    Kim, Yushin; Bulea, Thomas C.; Damiano, Diane L.

    2016-01-01

    Muscle synergies are hypothesized to reflect modular control of muscle groups via descending commands sent through multiple neural pathways. Recently, the number of synergies has been reported as a functionally relevant indicator of motor control complexity in individuals with neurological movement disorders. Yet the number of synergies extracted during a given activity, e.g., gait, varies within and across studies, even for unimpaired individuals. With no standardized methods for precise determination, this variability remains unexplained, making comparisons across studies and cohorts difficult. Here, we utilize k-means clustering and intra-class and between-level correlation coefficients to precisely discriminate reliable from unreliable synergies. Electromyography (EMG) was recorded bilaterally from eight leg muscles during treadmill walking at self-selected speed. Muscle synergies were extracted from 20 consecutive gait cycles using non-negative matrix factorization. We demonstrate that the number of synergies is highly dependent on the threshold when using the variance accounted for by the reconstructed EMG. Beyond the use of a threshold, our method utilized a quantitative metric to reliably identify four or five synergies underpinning walking in unimpaired adults and revealed synergies having poor reproducibility that should not be considered as true synergies. We show that robust and unreliable synergies emerge similarly, emphasizing the need for careful analysis in those with pathology. PMID:27695403
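The extraction step described above (non-negative matrix factorization with a variance-accounted-for criterion) can be sketched on synthetic data; the muscle counts and threshold behaviour here are illustrative, and this is not the authors' clustering pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic "EMG": 200 samples x 8 muscles built from 3 ground-truth synergies
W_true = rng.random((200, 3))
H_true = rng.random((3, 8))
emg = W_true @ H_true

def vaf(k):
    """Variance accounted for by a k-synergy NMF reconstruction of the EMG."""
    model = NMF(n_components=k, init='random', random_state=0, max_iter=2000)
    W = model.fit_transform(emg)
    resid = emg - W @ model.components_
    return 1.0 - np.sum(resid**2) / np.sum(emg**2)

for k in range(1, 6):
    print(k, round(vaf(k), 4))  # VAF rises with k and saturates near the true rank
```

The saturation point depends on where the VAF threshold is placed, which is exactly the sensitivity the study addresses with its reliability metrics.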

  7. Identification of a practical and reliable method for the evaluation of litter moisture in turkey production.

    Science.gov (United States)

    Vinco, L J; Giacomelli, S; Campana, L; Chiari, M; Vitale, N; Lombardi, G; Veldkamp, T; Hocking, P M

    2018-02-01

    1. An experiment was conducted to compare 5 different methods for the evaluation of litter moisture. 2. For litter collection and assessment, 55 farms were selected, one shed from each farm was inspected and 9 points were identified within each shed. 3. For each device, used for the evaluation of litter moisture, mean and standard deviation of wetness measures per collection point were assessed. 4. The reliability and overall consistency between the 5 instruments used to measure wetness were high (α = 0.72). 5. Measurement of three out of the 9 collection points were sufficient to provide a reliable assessment of litter moisture throughout the shed. 6. Based on the direct correlation between litter moisture and footpad lesions, litter moisture measurement can be used as a resource based on-farm animal welfare indicator. 7. Among the 5 methods analysed, visual scoring is the most simple and practical, and therefore the best candidate to be used on-farm for animal welfare assessment.

  8. A Newly Developed Method for Computing Reliability Measures in a Water Supply Network

    Directory of Open Access Journals (Sweden)

    Jacek Malinowski

    2016-01-01

    Full Text Available A reliability model of a water supply network has been examined. Its main features are: (1) a topology that can be decomposed by so-called state factorization into a (relatively) small number of derivative networks, each having a series-parallel structure; (2) binary-state components (either operative or failed) with given flow capacities; (3) a multi-state character of the whole network and its sub-networks, where a network state is defined as the maximal flow between a source (or sources) and a sink (or sinks); (4) integer values for all capacities (component, network, and sub-network). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities has been presented. It is based on the so-called state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given. (original abstract)
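The series-parallel flow logic underlying such models can be illustrated by brute-force state enumeration on a toy network (two parallel supply branches feeding one outlet pipe; the capacities and availabilities are hypothetical, and the paper's factorization/aggregation machinery is not reproduced):

```python
from itertools import product

# Pipes as (capacity, availability). Two parallel branches feed a series outlet.
branch_a = (5, 0.9)
branch_b = (3, 0.8)
outlet   = (7, 0.95)

dist = {}  # probability distribution of the network state (maximal flow)
for state in product([0, 1], repeat=3):      # 0 = failed, 1 = operative
    up_a, up_b, up_o = state
    # parallel branches: capacities add; series with the outlet: take the minimum
    flow = min(branch_a[0] * up_a + branch_b[0] * up_b, outlet[0] * up_o)
    p = 1.0
    for (cap, avail), up in zip((branch_a, branch_b, outlet), state):
        p *= avail if up else (1 - avail)
    dist[flow] = dist.get(flow, 0.0) + p

print(dist)  # e.g. flow 7 occurs with probability 0.9 * 0.8 * 0.95 = 0.684
```

State factorization avoids this exponential enumeration by decomposing the topology first, but the min/sum capacity rules per state are the same.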

  9. Uncertainty analysis methods for estimation of reliability of passive system of VHTR

    International Nuclear Information System (INIS)

    Han, S.J.

    2012-01-01

    An estimation of reliability of passive system for the probabilistic safety assessment (PSA) of a very high temperature reactor (VHTR) is under development in Korea. The essential approach of this estimation is to measure the uncertainty of the system performance under a specific accident condition. The uncertainty propagation approach according to the simulation of phenomenological models (computer codes) is adopted as a typical method to estimate the uncertainty for this purpose. This presentation introduced the uncertainty propagation and discussed the related issues focusing on the propagation object and its surrogates. To achieve a sufficient level of depth of uncertainty results, the applicability of the propagation should be carefully reviewed. For an example study, Latin-hypercube sampling (LHS) method as a direct propagation was tested for a specific accident sequence of VHTR. The reactor cavity cooling system (RCCS) developed by KAERI was considered for this example study. This is an air-cooled type passive system that has no active components for its operation. The accident sequence is a low pressure conduction cooling (LPCC) accident that is considered as a design basis accident for the safety design of VHTR. This sequence is due to a large failure of the pressure boundary of the reactor system such as a guillotine break of coolant pipe lines. The presentation discussed the obtained insights (benefit and weakness) to apply an estimation of reliability of passive system
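As a sketch of the direct-propagation idea mentioned above, Latin hypercube samples can be pushed through a toy limit state to estimate a failure probability; the load/capacity margins are hypothetical and this is not the RCCS/LPCC model:

```python
import numpy as np
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=10_000)                 # stratified uniforms in [0, 1)^2

# Map to physical variables (hypothetical heat load vs. passive cooling capacity)
load     = norm.ppf(u[:, 0], loc=100.0, scale=10.0)
capacity = norm.ppf(u[:, 1], loc=150.0, scale=15.0)

g = capacity - load                          # limit state: failure when g < 0
pf = np.mean(g < 0.0)
print(pf)                                    # close to the analytic value ~0.0028
```

For Gaussian inputs the answer is available in closed form (the margin is N(50, sqrt(325))), which makes this a convenient check that the stratified sampling is propagating the uncertainty correctly.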

  10. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  11. Human reliability analysis for probabilistic safety assessments - review of methods and issues

    International Nuclear Information System (INIS)

    Srinivas, G.; Guptan, Rajee; Malhotra, P.K.; Ghadge, S.G.; Chandra, Umesh

    2011-01-01

    It is well known that the two major events in World Nuclear Power Plant Operating history, namely the Three Mile Island and Chernobyl, were Human failure events. Subsequent to these two events, several significant changes have been incorporated in Plant Design, Control Room Design and Operator Training to reduce the possibility of Human errors during plant transients. Still, human error contribution to Risk in Nuclear Power Plant operations has been a topic of continued attention for research, development and analysis. Probabilistic Safety Assessments attempt to capture all potential human errors with a scientifically computed failure probability, through Human Reliability Analysis. Several methods are followed by different countries to quantify the Human error probability. This paper reviews the various popular methods being followed, critically examines them with reference to their criticisms and brings out issues for future research. (author)

  12. Regulatory relevant and reliable methods and data for determining the environmental fate of manufactured nanomaterials

    DEFF Research Database (Denmark)

    Baun, Anders; Sayre, Phil; Steinhäuser, Klaus Günter

    2017-01-01

    The widespread use of manufactured nanomaterials (MN) increases the need for describing and predicting their environmental fate and behaviour. A number of recent reviews have addressed the scientific challenges in disclosing the governing processes for the environmental fate and behaviour of MNs; however, there has been less focus on the regulatory adequacy of the data available for MN. The aim of this paper is therefore to review data, testing protocols and guidance papers which describe the environmental fate and behaviour of MN with a focus on their regulatory reliability and relevance. Given...... data. Gaps do however exist in test methods for environmental fate, such as methods to estimate heteroagglomeration and the tendency for MNs to transform in the environment.

  13. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
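A compact sketch of the decoding building block: the standard first-order Viterbi recursion in the log domain. Per the paper's idea, a high-order model is first folded into an equivalent first-order one over composite states and then decoded with exactly this routine; the tiny two-state model below is illustrative:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state path of a first-order HMM (log domain).
    A second-order HMM with transitions A2[i, j, k] can be decoded with the
    same routine after expanding it over composite states (i, j)."""
    T, N = len(obs), len(pi)
    logA = np.log(A)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + logA            # scores[i, j]: best path i -> j
        back[t] = np.argmax(scores, axis=0)
        logd = scores[back[t], np.arange(N)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.1, 0.9]])           # sticky transitions
B = np.array([[0.9, 0.1], [0.1, 0.9]])           # near-diagonal emissions
print(viterbi(pi, A, B, [0, 0, 1, 1]))           # [0, 0, 1, 1]
```

The expansion to composite states grows the state count from N to N**2 (or N**k for order k), which is why dedicated high-order decoders matter in practice.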

  14. A two-step method for fast and reliable EUV mask metrology

    Science.gov (United States)

    Helfenstein, Patrick; Mochi, Iacopo; Rajendran, Rajeev; Yoshitake, Shusuke; Ekinci, Yasin

    2017-03-01

    One of the major obstacles towards the implementation of extreme ultraviolet lithography for upcoming technology nodes in the semiconductor industry remains the realization of fast and reliable methods for the detection of patterned mask defects. We are developing a reflective EUV mask-scanning lensless imaging tool (RESCAN), installed at the Swiss Light Source synchrotron at the Paul Scherrer Institut. Our system is based on a two-step defect inspection method. In the first step, a low-resolution defect map is generated by die-to-die comparison of the diffraction patterns from areas with programmed defects to those from areas that are known to be defect-free on our test sample. In a later stage, a die-to-database comparison will be implemented, in which the measured diffraction patterns will be compared to those calculated directly from the mask layout. This Scattering Scanning Contrast Microscopy technique operates purely in the Fourier domain without the need to obtain the aerial image and, given a sufficient signal-to-noise ratio, defects are found in a fast and reliable way, albeit with a location accuracy limited by the spot size of the incident illumination. Having thus identified rough locations for the defects, a fine scan is carried out in the vicinity of these locations. Since our source delivers coherent illumination, we can use an iterative phase-retrieval method to reconstruct the aerial image of the scanned area with, in principle, diffraction-limited resolution without the need of an objective lens. Here, we will focus on the aerial image reconstruction technique and give a few examples to illustrate the capability of the method.

  15. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration, in which case the RSTF is retrieved and interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain-based matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach.
Based on the results so far, we find that the
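The deconvolution at the heart of the EGF method can be sketched with synthetic waveforms and a water-level (regularized) spectral division; the Green's function and source pulse below are hypothetical, and the small event is given an idealized delta-like STF, unlike the biased finite-duration cases analyzed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
green = np.exp(-t / 20.0) * np.sin(t / 3.0)        # shared propagation response
stf_large = np.exp(-0.5 * ((t - 30) / 6.0) ** 2)   # large-event source pulse
big = np.convolve(green, stf_large)[:n]            # large-event seismogram
small = green.copy()                               # small event: delta-like STF

# Water-level regularized spectral deconvolution: big / small in the frequency
# domain, with a floor on the denominator to avoid blowing up small spectra.
BIG, SMALL = np.fft.rfft(big), np.fft.rfft(small)
eps = 1e-3 * np.max(np.abs(SMALL)) ** 2
rstf = np.fft.irfft(BIG * np.conj(SMALL) / (np.abs(SMALL) ** 2 + eps), n)

print(int(np.argmax(rstf)))   # RSTF peak lands near the true pulse centre, t = 30
```

Replacing the delta-like small-event STF with a finite-duration pulse is exactly what introduces the spiky, non-physical RSTFs discussed above.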

  16. Higher order polynomial expansion nodal method for hexagonal core neutronics analysis

    International Nuclear Information System (INIS)

    Jin, Young Cho; Chang, Hyo Kim

    1998-01-01

    A higher-order polynomial expansion nodal (PEN) method is newly formulated as a means to improve the accuracy of the conventional PEN method solutions to multi-group diffusion equations in hexagonal core geometry. The new method is applied to solving various hexagonal core neutronics benchmark problems. The computational accuracy of the higher-order PEN method is then compared with that of the conventional PEN method, the analytic function expansion nodal (AFEN) method, and the ANC-H method. It is demonstrated that the higher-order PEN method improves the accuracy of the conventional PEN method and that it compares very well with the other nodal methods like the AFEN and ANC-H methods in accuracy

  17. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  18. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  19. Re-establishing the pecking order: Niche models reliably predict suitable habitats for the reintroduction of red-billed oxpeckers.

    Science.gov (United States)

    Kalle, Riddhika; Combrink, Leigh; Ramesh, Tharmalingam; Downs, Colleen T

    2017-03-01

    Distributions of avian mutualists are affected by changes in biotic interactions and environmental conditions driven directly/indirectly by human actions. The range contraction of red-billed oxpeckers (Buphagus erythrorhynchus) in South Africa is partly a result of the widespread use of acaracides (i.e., mainly cattle dips), toxic to both ticks and oxpeckers. We predicted the habitat suitability of red-billed oxpeckers in South Africa using ensemble models to assist the ongoing reintroduction efforts and to identify new reintroduction sites for population recovery. The distribution of red-billed oxpeckers was influenced by moderate to high tree cover, woodland habitats, and starling density (a proxy for cavity-nesting birds) with regard to nest-site characteristics. Consumable resources (host and tick density), bioclimate, surface water body density, and proximity to protected areas were other influential predictors. Our models estimated 42,576.88-98,506.98 km² of highly suitable habitat (0.5-1) covering the majority of Limpopo, Mpumalanga, North West, a substantial portion of northern KwaZulu-Natal (KZN) and the Gauteng Province. Niche models reliably predicted suitable habitat in 40%-61% of the reintroduction sites where breeding is currently successful. Ensemble, boosted regression trees and generalized additive models predicted few suitable areas in the Eastern Cape and south of KZN that are part of the historic range. A few southern areas in the Northern Cape, outside the historic range, also had suitable sites predicted. Our models are a promising decision support tool for guiding reintroduction programs at macroscales. Apart from active reintroductions, conservation programs should encourage farmers and/or landowners to use oxpecker-compatible agrochemicals and set up adequate nest boxes to facilitate the population recovery of the red-billed oxpecker, particularly in human-modified landscapes. To ensure long-term conservation success, we suggest that

  20. Two new solutions to the third-order symplectic integration method

    International Nuclear Information System (INIS)

    Iwatsu, Reima

    2009-01-01

    Two new solutions are obtained for the symplecticity conditions of the explicit third-order partitioned Runge-Kutta time integration method. One of them has a larger stability limit and better dispersion properties than Ruth's method.
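For reference, Ruth's classic third-order symplectic integrator (the baseline the new solutions are compared against) can be sketched with its commonly listed drift/kick coefficient sets; the harmonic-oscillator test below is illustrative:

```python
import numpy as np

# Ruth's third-order symplectic coefficients, as commonly listed:
# drift (position) weights c_i and kick (momentum) weights d_i, each summing to 1.
c = [1.0, -2.0 / 3.0, 2.0 / 3.0]
d = [-1.0 / 24.0, 3.0 / 4.0, 7.0 / 24.0]

def ruth3_step(q, p, dt, force):
    for ci, di in zip(c, d):
        q = q + ci * dt * p          # drift with the current momentum
        p = p + di * dt * force(q)   # kick with the updated position
    return q, p

# Harmonic oscillator H = (p^2 + q^2)/2; exact solution q(t) = cos(t).
q, p, dt = 1.0, 0.0, 0.01
for _ in range(1000):
    q, p = ruth3_step(q, p, dt, lambda x: -x)
print(abs(q - np.cos(10.0)))         # small: third-order accurate
```

Being symplectic, the scheme keeps the energy error bounded over long runs rather than letting it drift, which is the property the stability/dispersion comparisons in the record build on.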

  1. An Efficient Higher-Order Quasilinearization Method for Solving Nonlinear BVPs

    Directory of Open Access Journals (Sweden)

    Eman S. Alaidarous

    2013-01-01

    Full Text Available In this research paper, we present higher-order quasilinearization methods for boundary value problems as well as coupled boundary value problems. The construction of higher-order convergent methods depends on a decomposition method which is different from the Adomian decomposition method (Motsa and Sibanda, 2013). The reported method is very general and can be extended to the desired order of convergence for highly nonlinear differential equations, and it is computationally superior to the proposed iterative method based on Adomian decomposition because our iterative scheme avoids the calculation of Adomian polynomials while achieving the same computational order of convergence as the authors have claimed in Motsa and Sibanda, 2013. In order to check the validity and computational performance, the constructed iterative schemes are also successfully applied to bifurcation problems to calculate the values of critical parameters. The numerical performance is also tested for the one-dimensional Bratu and Frank-Kamenetzkii equations.
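A minimal sketch of plain (first-order) quasilinearization, i.e. Newton linearization of a nonlinear BVP, applied to the one-dimensional Bratu problem mentioned above with lambda = 1; this is the classical scheme, not the higher-order variants of the paper:

```python
import numpy as np

# Bratu BVP: u'' + exp(u) = 0, u(0) = u(1) = 0.
# Quasilinearization solves, at each step, the *linear* BVP
#   v'' + exp(u_k) v = exp(u_k) (u_k - 1),
# obtained by linearizing exp(u) about the current iterate u_k.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)                      # initial guess

for _ in range(10):
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0        # Dirichlet boundary rows (b = 0 there)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 + np.exp(u[i])
        b[i] = np.exp(u[i]) * (u[i] - 1.0)
    u = np.linalg.solve(A, b)

# Residual of the discrete nonlinear equation at the converged iterate
resid = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 + np.exp(u[1:-1])
print(np.max(np.abs(resid)))         # tiny after a few Newton sweeps
```

Quasilinearization converges quadratically near the solution; the paper's contribution is raising that convergence order further without resorting to Adomian polynomials.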

  2. A high order multi-resolution solver for the Poisson equation with application to vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Spietz, Henrik Juul; Walther, Jens Honore

    A high order method is presented for solving the Poisson equation subject to mixed free-space and periodic boundary conditions by using fast Fourier transforms (FFT). The high order convergence is achieved by deriving mollified Green’s functions from a high order regularization function which...
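A minimal sketch of the FFT-based Poisson solve for the purely periodic case (the free-space and mollified-Green's-function machinery of the paper is not reproduced here):

```python
import numpy as np

# Spectral solve of u'' = f on [0, 1) with periodic boundary conditions:
# divide the Fourier coefficients of f by -(2*pi*k)^2, zeroing the mean mode.
n = 64
x = np.arange(n) / n
f = np.sin(2 * np.pi * x)

fh = np.fft.rfft(f)
k = np.fft.rfftfreq(n, d=1.0 / n)        # integer wavenumbers 0 .. n/2
denom = -(2 * np.pi * k) ** 2
denom[0] = 1.0                           # avoid divide-by-zero on the mean mode
uh = fh / denom
uh[0] = 0.0                              # pin the free constant (zero-mean u)
u = np.fft.irfft(uh, n)

exact = -np.sin(2 * np.pi * x) / (4 * np.pi ** 2)
print(np.max(np.abs(u - exact)))         # spectrally accurate, near machine eps
```

Free-space and mixed boundary conditions require replacing the simple -(2*pi*k)^-2 symbol with the transform of a (regularized) Green's function, which is where the high-order mollification in the record enters.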

  3. An Alternating Direction Method for Convex Quadratic Second-Order Cone Programming with Bounded Constraints

    Directory of Open Access Journals (Sweden)

    Xuewen Mu

    2015-01-01

    quadratic programming over second-order cones and a bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bound set. The result of convergence is given. Numerical results demonstrate that our method is efficient for the convex quadratic second-order cone programming problems with bounded constraints.
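
    The metric projection onto a second-order cone that each iteration of such a method requires has a well-known closed form; a minimal sketch:

```python
import numpy as np

def project_soc(x, t):
    """Metric projection of (x, t) onto the second-order cone
    K = {(x, t) : ||x||_2 <= t} (standard closed-form expression)."""
    nx = np.linalg.norm(x)
    if nx <= t:                # already inside the cone: keep the point
        return x.copy(), t
    if nx <= -t:               # inside the polar cone: project to the origin
        return np.zeros_like(x), 0.0
    alpha = (nx + t) / 2.0     # otherwise: average onto the cone boundary
    return (alpha / nx) * x, alpha

x, t = project_soc(np.array([3.0, 4.0]), 0.0)   # ||(3, 4)|| = 5 > 0
print(x, t)   # lies on the boundary: ((1.5, 2.0), 2.5)
```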

  4. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection (POD) curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood formulation and priors from expert knowledge. However, the resulting inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model with a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least squares support vector regression is adopted as the metamodelling technique owing to its robustness on non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.
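
    The paper's Bayesian machinery is not reproducible from the abstract, but the standard signal-response POD model that such curves are usually built on is easy to sketch: with ln(signal) linear in ln(defect size) plus Gaussian noise, POD(a) is a normal CDF and the 90%-detectable size a90 has a closed form. All parameter values below are hypothetical:

```python
import math

def pod(a, beta0, beta1, y_th, sigma):
    """POD(a) under the standard signal-response (a-hat vs a) model:
    ln(signal) = beta0 + beta1*ln(a) + N(0, sigma^2), with detection when
    the signal exceeds the threshold y_th.  Normal CDF via math.erf."""
    z = (beta0 + beta1 * math.log(a) - y_th) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a90(beta0, beta1, y_th, sigma):
    """Defect size detected with 90% probability (closed form)."""
    z90 = 1.2815515655446004            # inverse normal CDF at 0.9
    return math.exp((y_th + z90 * sigma - beta0) / beta1)

# Hypothetical parameters for illustration only
b0, b1, yth, sig = 0.5, 1.2, 1.0, 0.4
size = a90(b0, b1, yth, sig)
print(size, pod(size, b0, b1, yth, sig))   # POD(a90) = 0.9 by construction
```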

  5. Using DOProC method in reliability assessment of steel elements exposed to fatigue

    Directory of Open Access Journals (Sweden)

    Krejsa Martin

    2017-01-01

    Full Text Available Fatigue crack damage depends on the number of stress range cycles, which acts as a time factor in the course of reliability over the entire designed service life. Three sizes are important for characterising the propagation of fatigue cracks: the initial size, the detectable size and the acceptable size. The theoretical model of fatigue crack progression can be based on linear fracture mechanics. Depending on the location of the initial crack, the crack may propagate in a structural element from, e.g., the edge or the surface. When the required degree of reliability is specified, it is possible to determine the time of the first inspection of the structure focusing on fatigue damage. Using conditional probability and a Bayesian approach, times for subsequent inspections can be determined. For probabilistic modelling of fatigue crack progression, an original probabilistic method was used: the Direct Optimized Probabilistic Calculation (DOProC), which uses a purely numerical approach without any simulation techniques or approximations, based on optimized numerical integration.
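
    The crack-growth model underlying such analyses is typically the Paris-Erdogan law da/dN = C*(Delta K)^m with Delta K = Y*Delta_sigma*sqrt(pi*a); integrating it between the initial and acceptable crack sizes gives the cycle budget against which inspection times are set. A deterministic sketch (parameter values hypothetical, geometry factor Y held constant):

```python
import math

def cycles_between(a0, a1, C, m, dsigma, Y=1.0, steps=20000):
    """Cycles to grow a crack from a0 to a1 under the Paris-Erdogan law
    da/dN = C*(Y*dsigma*sqrt(pi*a))**m, integrated by the midpoint rule."""
    da = (a1 - a0) / steps
    N = 0.0
    for i in range(steps):
        a = a0 + (i + 0.5) * da
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)
    return N

def cycles_closed_form(a0, a1, C, m, dsigma, Y=1.0):
    """Closed-form integral of the Paris-Erdogan law for m != 2."""
    k = C * (Y * dsigma * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (a1 ** e - a0 ** e) / (k * e)

# Hypothetical steel-like values, crack sizes in mm
Nnum = cycles_between(1.0, 10.0, C=1e-11, m=3.0, dsigma=100.0)
Nana = cycles_closed_form(1.0, 10.0, C=1e-11, m=3.0, dsigma=100.0)
print(Nnum, Nana)   # numerical and analytic cycle counts agree closely
```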

  6. Reliability of the k{sub 0}-standardization method using geological sample analysed in a proficiency test

    Energy Technology Data Exchange (ETDEWEB)

    Pelaes, Ana Clara O.; Menezes, Maria Ângela de B.C., E-mail: anacpelaes@gmail.com, E-mail: menezes@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-11-01

    Neutron Activation Analysis (NAA) is an analytical technique for determining the elemental chemical composition of samples of several matrix types. It has been applied by the Laboratory for Neutron Activation Analysis, located at the Centro de Desenvolvimento da Tecnologia Nuclear/Comissão Nacional de Energia Nuclear (Nuclear Technology Development Center/Brazilian Commission for Nuclear Energy), CDTN/CNEN, since the start-up of the TRIGA MARK I IPR-R1 reactor in 1960. Among the variants of the technique, the k{sub 0}-standardization method, established at CDTN in 1995, is the most efficient; in 2003 it was re-established and optimized. In order to verify the reproducibility of results generated by applying the k{sub 0}-standardization method at CDTN, aliquots of a geological sample sent by WEPAL (Wageningen Evaluating Programs for Analytical Laboratories) were analysed, and the results were compared with those obtained through the intercomparison of results organized by the International Atomic Energy Agency in 2015. WEPAL is an institution accredited for the organisation of interlaboratory studies, preparing and organizing proficiency testing schemes all over the world. The comparison thus aims to contribute to the continuous improvement of the quality of the results obtained by the CDTN. The objective of this study was to verify the reliability of the method as applied two years after the intercomparison round. (author)

  7. Nuclear material enrichment identification method based on cross-correlation and high order spectra

    International Nuclear Information System (INIS)

    Yang Fan; Wei Biao; Feng Peng; Mi Deling; Ren Yong

    2013-01-01

    In order to enhance the sensitivity of a nuclear material identification system (NMIS) to changes in nuclear material enrichment, the principle of high order statistical features is introduced and applied to a traditional NMIS. We present a new enrichment identification method based on cross-correlation and a high order spectrum algorithm. By applying the identification method to the NMIS, 3D graphs carrying the nuclear material signature are obtained and can be used as new signatures to identify the enrichment of nuclear materials. Simulation results show that the identification method can suppress background and electronic system noise and improve the sensitivity to enrichment changes by an exponential order, with no modification of the system structure. (authors)
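
    The higher-order-spectrum part of the method cannot be reproduced from the abstract, but its cross-correlation building block is easy to demonstrate on synthetic signals (this is illustrative only, not detector physics):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(400)          # reference signal (synthetic)
delay = 5
b = np.concatenate([np.zeros(delay), a[:-delay]])   # delayed copy: b[n] = a[n-5]

# Full cross-correlation; output index k corresponds to lag k - (len(a) - 1)
c = np.correlate(b, a, mode="full")
lag = int(np.argmax(c)) - (len(a) - 1)
print(lag)   # recovers the injected delay of 5 samples
```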

  8. Numerical simulation of stratified shear flow using a higher order Taylor series expansion method

    Energy Technology Data Exchange (ETDEWEB)

    Iwashige, Kengo; Ikeda, Takashi [Hitachi, Ltd. (Japan)

    1995-09-01

    A higher order Taylor series expansion method is applied to two-dimensional numerical simulation of stratified shear flow. In the present study, a central-difference-like scheme is adopted for even expansion orders and an upwind-difference-like scheme for odd orders, the expansion order being variable. To evaluate the effect of the expansion order on the numerical results, a stratified shear flow test in a rectangular channel (Reynolds number = 1.7x10{sup 4}) is carried out, and the numerical velocity and temperature fields are compared with experimental results measured by laser Doppler velocimetry and thermocouples. The results confirm that the higher- and odd-order methods can simulate mean velocity distributions, root-mean-square velocity fluctuations, Reynolds stress, temperature distributions, and root-mean-square temperature fluctuations.

  9. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified; 282 titles were potentially related, and 42 met the inclusion criteria. Results: The intra-class correlation coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: The intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to perform reliability analyses correctly.
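
    For reference, the single-measures, two-way random-effects ICC(2,1) of Shrout and Fleiss can be computed directly from the ANOVA mean squares; a sketch on synthetic ratings:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss), computed from the ANOVA mean squares.
    Y has one row per subject and one column per rater."""
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between raters
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between three raters gives ICC = 1
perfect = np.array([[7.0, 7.0, 7.0],
                    [5.0, 5.0, 5.0],
                    [9.0, 9.0, 9.0],
                    [6.0, 6.0, 6.0]])
print(icc_2_1(perfect))   # -> 1.0

# Small rater noise pushes the ICC below 1
rng = np.random.default_rng(0)
noisy = perfect + rng.normal(0.0, 0.3, perfect.shape)
print(icc_2_1(noisy))
```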

  10. Control of the large renal vein in limited dissected space during laparoscopic nephrectomy: a simple and reliable method

    NARCIS (Netherlands)

    Kijvikai, Kittinut; Laguna, M. Pilar; de la Rosette, Jean

    2006-01-01

    We describe our technique for control of a large renal vein in the limited dissected space during laparoscopic nephrectomy. This technique is a simple, inexpensive and reliable method, especially for ligation of large and short renal veins.

  11. Method of moments solution of volume integral equations using higher-order hierarchical Legendre basis functions

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter

    2004-01-01

    An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27...... of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained...

  12. New Methods for Building-In and Improvement of Integrated Circuit Reliability

    NARCIS (Netherlands)

    van der Pol, J.A.; van der Pol, Jacob Antonius

    2000-01-01

    Over the past 30 years the reliability of semiconductor products has improved by a factor of 100, while at the same time the complexity of the circuits has increased by a factor of 10{sup 5}. This 7-decade reliability improvement has been realised by implementing a sophisticated reliability assurance system.

  13. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    Energy Technology Data Exchange (ETDEWEB)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch [Istituto Ricerche Solari Locarno (IRSOL), 6605 Locarno-Monti (Switzerland)

    2017-08-20

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  14. Simple and Reliable Method to Estimate the Fingertip Static Coefficient of Friction in Precision Grip.

    Science.gov (United States)

    Barrea, Allan; Bulens, David Cordova; Lefevre, Philippe; Thonnard, Jean-Louis

    2016-01-01

    The static coefficient of friction (µ_static) plays an important role in dexterous object manipulation. The minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µ_static actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µ_static and NF easily. Therefore, we propose a new method allowing simple and reliable measurement of the fingertip µ_static at different NF levels, as well as an algorithm for determining µ_static from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µ_static is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µ_static and NF. Our method allows the continuous estimation of µ_static as a function of NF during dexterous manipulation, based on the relationship between µ_static and NF measured before manipulation.
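
    A negative power law µ_static = k*NF^n (n < 0) is linear in log-log coordinates, so its parameters can be recovered by ordinary least squares; a sketch on synthetic data (all coefficient values hypothetical):

```python
import numpy as np

# Synthetic "measurements" from a negative power law mu = k * NF**n
k_true, n_true = 1.2, -0.25
nf = np.linspace(0.2, 8.0, 40)                      # normal force, N
rng = np.random.default_rng(1)
mu = k_true * nf ** n_true * np.exp(rng.normal(0.0, 0.01, nf.size))

# Linearize: ln(mu) = ln(k) + n*ln(NF), then fit a line by least squares
slope, intercept = np.polyfit(np.log(nf), np.log(mu), 1)
k_est, n_est = np.exp(intercept), slope
print(k_est, n_est)   # close to the true (1.2, -0.25)
```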

  15. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
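
    The a-posteriori check that the paper performs, sizing deterministically with a safety factor and then estimating reliability by Monte Carlo, can be illustrated with a deliberately tiny toy model (a single PV array against a fixed daily load; all numbers hypothetical):

```python
import numpy as np

def unmet_load_probability(pv_area_m2, n_days=20000, seed=42):
    """Toy Monte Carlo estimate of the probability that daily PV energy
    falls short of a fixed daily load.  All figures are hypothetical."""
    rng = np.random.default_rng(seed)
    # daily insolation in kWh/m^2: lognormal scatter around a mean of 4
    insolation = rng.lognormal(mean=np.log(4.0), sigma=0.35, size=n_days)
    efficiency = 0.18
    load_kwh = 10.0                       # fixed daily demand
    energy = pv_area_m2 * efficiency * insolation
    return float(np.mean(energy < load_kwh))

# Deterministic sizing for the "average day", scaled by a safety factor:
# a larger factor reduces, but does not eliminate, the unmet-load probability.
for sf in (1.0, 1.5, 2.0):
    area = sf * 10.0 / (0.18 * 4.0)
    print(sf, unmet_load_probability(area))
```

    Sizing for the average day (safety factor 1) leaves the load unmet about half the time, which is the paper's point about the unpredictable impact of deterministic safety factors.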

  16. Methods for Calculating Frequency of Maintenance of Complex Information Security System Based on Dynamics of Its Reliability

    Science.gov (United States)

    Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.

    2017-11-01

    This article describes the calculation of the reliability of a complex information security system (CISS), using the example of a technospheric security management model, and shows how to determine the frequency of its maintenance from the system reliability parameter, which makes it possible to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system reflecting the possibility of its failure (and, as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, a certain acceptable level of IS risk is assigned by experts for a particular information system; where reliability is a risk-forming factor, maintaining an acceptable risk level should be carried out by routine analysis of the condition of the CISS and its elements and their timely service. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such system reliability is given. The curve of CISS reliability over time is S-shaped and can be divided into three periods: an almost invariable high level of reliability, uniform reliability reduction, and an almost invariable low level of reliability. Given a minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system meets requirements. Ideally, this period should not be longer than the first period of the curve. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve the voluminous and critical task of information asset risk management.
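
    Given any monotone S-shaped reliability curve, the maintenance deadline is simply the time at which the curve crosses the minimum acceptable level. The paper's own dynamics formula is not reproduced in the record, so the sketch below substitutes a hypothetical logistic decay and finds the crossing by bisection:

```python
import math

def reliability(t, r_hi=0.99, r_lo=0.2, t_mid=400.0, tau=60.0):
    """Hypothetical S-shaped reliability curve: a high plateau, a uniform
    decline, then a low plateau (a logistic decay; stands in for the
    paper's own dynamics formula, which the record does not give)."""
    return r_lo + (r_hi - r_lo) / (1.0 + math.exp((t - t_mid) / tau))

def maintenance_time(r_min, lo=0.0, hi=2000.0, iters=100):
    """Latest time at which reliability still meets r_min (bisection;
    the curve is monotonically decreasing in t)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if reliability(mid) >= r_min:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_service = maintenance_time(0.9)
print(t_service, reliability(t_service))   # reliability is ~0.9 here
```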

  17. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular, Pontryagin’s principle of maximum to solve the non-linear optimisation problem and derive...... the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs which appear...... as soon as among the constraints there are bounds on a failure rate of a component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided....

  18. A novel reliable method of DNA extraction from olive oil suitable for molecular traceability.

    Science.gov (United States)

    Raieta, Katia; Muccillo, Livio; Colantuoni, Vittorio

    2015-04-01

    Extra virgin olive oil production has a worldwide economic impact. The use of this brand, however, is of great concern to institutions and private industries because of the increasing number of fraud and adulteration attempts on market products. Here, we present a novel, reliable and inexpensive method for extracting DNA from commercial virgin and extra virgin olive oils. The DNA is stable over time and amenable to molecular analyses; in fact, by carrying out simple sequence repeat (SSR) marker analysis, we characterise the genetic profile of monovarietal olive oils. By comparing the oil-derived pattern with that of the corresponding tree, we can unambiguously identify four cultivars from Samnium, a region of Southern Italy, and distinguish them from reference and more widely used varieties. Through parentage statistical analysis, we also identify the putative pollinators, establishing an unprecedented and powerful tool for olive oil traceability. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A reliability design method for a lithium-ion battery pack considering the thermal disequilibrium in electric vehicles

    Science.gov (United States)

    Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang

    2018-05-01

    With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack considering thermal disequilibrium is proposed in this paper based on cell redundancy. Using this method, a three-dimensional electric-thermal-flow coupled model, a stochastic degradation model of cells under dynamic field conditions and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by example analyses with different redundancy strategies. By comparing the reliability of battery packs with different numbers and configurations of redundant cells, several conclusions about the redundancy strategy are obtained. Most notably, the reliability does not increase monotonically with the number of redundant cells, because of thermal disequilibrium effects. In this work, a 6 × 5 parallel-series configuration is the optimal system structure. In addition, the effects of the cell arrangement and cooling conditions are investigated.
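
    As a first cut, the reliability of a parallel-series pack such as the 6 × 5 configuration can be aggregated from cell reliability under an independence assumption; the paper's point is precisely that thermal disequilibrium breaks this assumption, so the formula below is only the idealized baseline:

```python
def parallel_series_reliability(p_cell, n_parallel, n_series):
    """Reliability of a series chain of modules, each a parallel group of
    cells, assuming independent, identical cells.  This is the idealized
    baseline; thermal disequilibrium in a real pack breaks the
    independence assumption."""
    module = 1.0 - (1.0 - p_cell) ** n_parallel   # a parallel group survives
    return module ** n_series                      # all modules must survive

# Example: 6 series modules of 5 parallel cells (a 6 x 5 configuration)
print(parallel_series_reliability(0.9, n_parallel=5, n_series=6))
```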

  20. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  1. First-order Convex Optimization Methods for Signal and Image Processing

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm

    2012-01-01

    In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can...... be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...
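
    Of the techniques listed, the proximal gradient method is the easiest to demonstrate: for l1-regularized least squares (the lasso), the proximal operator is soft-thresholding, giving the classical ISTA iteration. A minimal sketch:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)   # gradient step + prox
    return x

# Sanity check: with A = I the minimizer is exactly soft_threshold(b, lam)
b = np.array([3.0, -0.5, 1.2, 0.1])
x = ista(np.eye(4), b, lam=1.0)
print(x)   # approximately [2., 0., 0.2, 0.]
```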

  2. Khater method for the nonlinear Sharma-Tasso-Olver (STO) equation of fractional order

    Science.gov (United States)

    Bibi, Sadaf; Mohyud-Din, Syed Tauseef; Khan, Umar; Ahmed, Naveed

    In this work, we have implemented a direct method, known as the Khater method, to establish exact solutions of nonlinear partial differential equations of fractional order. The number of solutions provided by this method is greater than that of other traditional methods. Exact solutions of the nonlinear fractional-order Sharma-Tasso-Olver (STO) equation are expressed in terms of kink, travelling wave, periodic and solitary wave solutions. The modified Riemann-Liouville derivative and the fractional complex transform have been used for compatibility with the fractional-order sense. The solutions have been graphically simulated to illustrate the physical aspects and the importance of the method. A comparative discussion between our established results and results obtained by existing methods is also presented. Our results clearly reveal that the proposed method is an effective, powerful and straightforward technique for working out new solutions of various types of differential equations of non-integer order in the fields of applied sciences and engineering.

  3. Fourth-order perturbative extension of the single-double excitation coupled-cluster method

    International Nuclear Information System (INIS)

    Derevianko, Andrei; Emmons, Erik D.

    2002-01-01

    Fourth-order many-body corrections to matrix elements for atoms with one valence electron are derived. The obtained diagrams are classified using coupled-cluster-inspired separation into contributions from n-particle excitations from the lowest-order wave function. The complete set of fourth-order diagrams involves only connected single, double, and triple excitations and disconnected quadruple excitations. Approximately half of the fourth-order diagrams are not accounted for by the popular coupled-cluster method truncated at single and double excitations (CCSD). Explicit formulas are tabulated for the entire set of fourth-order diagrams missed by the CCSD method and its linearized version, i.e., contributions from connected triple and disconnected quadruple excitations. A partial summation scheme of the derived fourth-order contributions to all orders of perturbation theory is proposed

  4. A human reliability assessment screening method for the NRU upgrade project

    International Nuclear Information System (INIS)

    Bremner, F.M.; Alsop, C.J.

    1997-01-01

    The National Research Universal (NRU) reactor is a 130 MW, low-pressure, heavy-water-cooled and -moderated research reactor. The reactor is used for research, both in support of Canada's CANDU development program and for a wide variety of other research applications. In addition, NRU plays an important part in the production of medical isotopes, e.g., generating 80% of worldwide supplies of Molybdenum-99. NRU is owned and operated by Atomic Energy of Canada Ltd. (AECL) and is currently undergoing upgrading as part of AECL's continuing commitment to operate its facilities in a safe manner. As part of these upgrades, both deterministic and probabilistic safety assessments are being carried out. It was recognized that the assignment of Human Error Probabilities (HEPs) is an important part of the Probabilistic Safety Assessment (PSA) studies, particularly for a facility whose design predates modern ergonomic practices and which will undergo a series of backfitted modifications whilst continuing to operate. A simple Human Reliability Assessment (HRA) screening method, looking at both pre- and post-accident errors, was used in the initial safety studies. However, following review of this method within AECL and externally by the regulator, it was judged that benefits could be gained for future error reduction by including additional features, as described later in this document. The HRA development project consisted of several stages: needs analysis, literature review, development of the method (including testing and evaluation), and implementation. This paper discusses each of these stages in further detail. (author)

  5. Lagrange-Noether method for solving second-order differential equations

    Institute of Scientific and Technical Information of China (English)

    Wu Hui-Bin; Wu Run-Heng

    2009-01-01

    The purpose of this paper is to provide a new method, called the Lagrange-Noether method, for solving second-order differential equations. The method is, firstly, to write the second-order differential equations completely or partially in the form of Lagrange equations, and secondly, to obtain the integrals of the equations by using the Noether theory of the Lagrange system. An example is given to illustrate the application of the result.

  6. Non-asymptotic fractional order differentiators via an algebraic parametric method

    KAUST Repository

    Liu, Dayan

    2012-08-01

    Recently, Mboup, Join and Fliess [27], [28] introduced non-asymptotic integer order differentiators by using an algebraic parametric estimation method [7], [8]. In this paper, in order to obtain non-asymptotic fractional order differentiators we apply this algebraic parametric method to truncated expansions of fractional Taylor series based on the Jumarie's modified Riemann-Liouville derivative [14]. Exact and simple formulae for these differentiators are given where a sliding integration window of a noisy signal involving Jacobi polynomials is used without complex mathematical deduction. The efficiency and the stability with respect to corrupting noises of the proposed fractional order differentiators are shown in numerical simulations. © 2012 IEEE.

  7. Non-asymptotic fractional order differentiators via an algebraic parametric method

    KAUST Repository

    Liu, Dayan; Gibaru, O.; Perruquetti, Wilfrid

    2012-01-01

    Recently, Mboup, Join and Fliess [27], [28] introduced non-asymptotic integer order differentiators by using an algebraic parametric estimation method [7], [8]. In this paper, in order to obtain non-asymptotic fractional order differentiators we apply this algebraic parametric method to truncated expansions of fractional Taylor series based on the Jumarie's modified Riemann-Liouville derivative [14]. Exact and simple formulae for these differentiators are given where a sliding integration window of a noisy signal involving Jacobi polynomials is used without complex mathematical deduction. The efficiency and the stability with respect to corrupting noises of the proposed fractional order differentiators are shown in numerical simulations. © 2012 IEEE.

  8. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Jacek

    2004-05-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
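
    The tree-based algorithm of the record is designed to avoid the exponential cost of brute-force expansion, but for small systems the textbook inclusion-exclusion over minimal paths yields the same reliability and makes a useful cross-check:

```python
from itertools import combinations

def reliability_from_min_paths(paths, p):
    """System reliability from minimal paths by inclusion-exclusion.
    paths: list of sets of component ids; p: dict id -> reliability.
    Exponential in the number of paths: a brute-force cross-check,
    not the optimized tree algorithm of the record."""
    total = 0.0
    for r in range(1, len(paths) + 1):
        for subset in combinations(paths, r):
            union = set().union(*subset)      # components that must all work
            prob = 1.0
            for comp in union:
                prob *= p[comp]
            total += (-1) ** (r + 1) * prob
    return total

# Two disjoint minimal paths {1,2} and {3,4}, all components at 0.9:
# R = p^2 + p^2 - p^4 = 1 - (1 - p^2)^2
p = {1: 0.9, 2: 0.9, 3: 0.9, 4: 0.9}
print(reliability_from_min_paths([{1, 2}, {3, 4}], p))   # approximately 0.9639
```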

  9. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    International Nuclear Information System (INIS)

    Malinowski, Jacek

    2004-01-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)

  10. Diagnosing developmental dyscalculia on the basis of reliable single case FMRI methods: promises and limitations.

    Directory of Open Access Journals (Sweden)

    Philipp Johannes Dinkel

    Full Text Available FMRI-studies are mostly based on a group study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI-techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, support-vector machine and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations of both tasks were employed simultaneously.
Results suggest that deficits in number representation in the visual-parietal cortex get compensated for through finger related aspects of number

  11. Diagnosing developmental dyscalculia on the basis of reliable single case FMRI methods: promises and limitations.

    Science.gov (United States)

    Dinkel, Philipp Johannes; Willmes, Klaus; Krinzinger, Helga; Konrad, Kerstin; Koten, Jan Willem

    2013-01-01

    FMRI-studies are mostly based on a group study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI-techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, support-vector machine and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations of both tasks were employed simultaneously. Results suggest that deficits in number representation in the visual-parietal cortex get compensated for through finger related aspects of number representation in

  12. Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations

    KAUST Repository

    Abdulle, Assyr

    2013-01-01

    We introduce a new family of explicit integrators for stiff Itô stochastic differential equations (SDEs) of weak order two. These numerical methods belong to the class of one-step stabilized methods with extended stability domains and do not suffer from the step size reduction faced by standard explicit methods. The family is based on the standard second order orthogonal Runge-Kutta-Chebyshev (ROCK2) methods for deterministic problems. The convergence, mean-square and asymptotic stability properties of the methods are analyzed. Numerical experiments, including applications to nonlinear SDEs and parabolic stochastic partial differential equations, are presented and confirm the theoretical results. © 2013 Society for Industrial and Applied Mathematics.
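    The step-size restriction that stabilized methods remove can be seen from the stability polynomials alone. The sketch below uses the first-order shifted Chebyshev polynomial T_s(1 + z/s**2) as a stand-in (the actual ROCK2 polynomials are damped, second-order variants); the test problem y' = lam*y and the chosen numbers are illustrative.

```python
from math import acos, cos

lam, h = -50.0, 0.08
z = h * lam                          # z = -4.0, on the negative real axis

# Forward Euler is stable only if |1 + z| <= 1, i.e. h <= 2/|lam| = 0.04 here.
euler_factor = abs(1.0 + z)          # 3.0 -> unstable at this step size

# Chebyshev-stabilized explicit methods have stability polynomials of the
# form T_s(1 + z/s**2), with |T_s| <= 1 for all z in [-2*s**2, 0].
s = 5                                # number of stages
x = 1.0 + z / s**2                   # 0.84, inside [-1, 1]
cheb_factor = abs(cos(s * acos(x)))  # |T_5(0.84)| <= 1 -> stable
```

The stability interval grows quadratically with the number of stages s, which is why these methods avoid the severe step-size reduction of standard explicit schemes.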

  13. Study on Differential Algebraic Method of Aberrations up to Arbitrary Order for Combined Electromagnetic Focusing Systems

    Institute of Scientific and Technical Information of China (English)

    CHENG Min; TANG Tiantong; YAO Zhenhua; ZHU Jingping

    2001-01-01

    The differential algebraic method is a powerful technique in computer numerical analysis based on nonstandard analysis and formal series theory. It can compute arbitrarily high order derivatives with excellent accuracy. The principle of the differential algebraic method is applied to calculate high order aberrations of combined electromagnetic focusing systems. As an example, the third-order geometric aberration coefficients of an actual combined electromagnetic focusing system were calculated. Arbitrarily high order aberrations are conveniently calculated by the differential algebraic method and the fifth-order aberration diagrams are given.

  14. A Method for Improving Reliability of Radiation Detection using Deep Learning Framework

    International Nuclear Information System (INIS)

    Chang, Hojong; Kim, Tae-Ho; Han, Byunghun; Kim, Hyunduk; Kim, Ki-duk

    2017-01-01

    Radiation detection is an essential technology for the overall field of radiation and nuclear engineering. Previously, radiation detection technology relied on preparing a table relating input spectra to output spectra in advance, which requires simulating numerous predicted output spectra using parameters that model the spectrum. In this paper, we propose a new technique to improve the performance of radiation detectors. The software in radiation detectors has been stagnant for a while, with possible intrinsic simulation error. In the proposed method, the input source is predicted from the output spectrum measured by the radiation detector using a deep neural network. With a highly complex model, we expect that the complex patterns between the data and the labels can be captured well. Furthermore, the radiation detector should be calibrated regularly and beforehand; we propose a method to calibrate radiation detectors using a GAN. We hope that the power of deep learning may also reach radiation detectors and bring major improvements to the field. With an improved radiation detector, the reliability of detection would be higher, and there remain many tasks in nuclear engineering that deep learning could help solve.

  15. Development of a Method for Quantifying the Reliability of Nuclear Safety-Related Software

    International Nuclear Information System (INIS)

    Yi Zhang; Golay, Michael W.

    2003-01-01

    The work of our project is intended to help introduce digital technologies into nuclear power plant safety-related software applications. In our project we utilize a combination of modern software engineering methods: design process discipline and feedback, formal methods, automated computer aided software engineering tools, automatic code generation, and extensive feasible structure flow path testing to improve software quality. The tactics include ensuring that the software structure is kept simple, permitting routine testing during design development, permitting extensive finished product testing in the input data space of most likely service, and using test-based Bayesian updating to estimate the probability that a random software input will encounter an error upon execution. From the results obtained the software reliability can be both improved and its value estimated. Hopefully our success in the project's work can aid the transition of the nuclear enterprise into the modern information world. In our work, we have been using the proprietary sample software, the digital Signal Validation Algorithm (SVA), provided by Westinghouse, and our work is being done with their collaboration. The SVA software is used for selecting the plant instrumentation signal set which is to be used as the input to the digital Plant Protection System (PPS). This is the system that automatically decides whether to trip the reactor. In our work, we are using the 001 computer assisted software engineering (CASE) tool of Hamilton Technologies Inc. This tool is capable of stating the syntactic structure of a program reflecting its state requirements, logical functions and data structure
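    The abstract does not spell out the project's exact estimator, but "test-based Bayesian updating" of a per-input failure probability is classically done with the conjugate Beta-Bernoulli form; the sketch below uses that form, with an illustrative prior and test counts.

```python
def update_failure_prob(prior_a, prior_b, n_tests, n_failures):
    """Conjugate Beta-Bernoulli update of the per-demand failure probability:
    Beta(a, b) prior -> Beta(a + f, b + (n - f)) posterior after n tests
    with f observed failures."""
    a = prior_a + n_failures
    b = prior_b + (n_tests - n_failures)
    return a, b, a / (a + b)          # posterior parameters and posterior mean

# Uniform Beta(1, 1) prior; 1000 random-input tests with no failures observed.
a, b, mean = update_failure_prob(1, 1, 1000, 0)   # mean = 1/1002
```

Even with zero observed failures, the posterior mean stays strictly positive, which is the usual conservative feature of this estimator.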

  16. Reliability Analysis of Corroded Reinforced Concrete Beams Using Enhanced HL-RF Method

    Directory of Open Access Journals (Sweden)

    Arash Mohammadi Farsani

    2015-12-01

    Full Text Available Steel corrosion of bars in concrete structures is a complex process which leads to the reduction of the bars' cross-section and decreases the resistance of the concrete and steel materials. In this study, reliability analysis of a reinforced concrete beam with corrosion defects under distributed load was investigated using the enhanced Hasofer-Lind and Rackwitz-Fiessler (EHL-RF) method based on a relaxed approach. Robustness of the EHL-RF algorithm was compared with the HL-RF using a complicated example. It was seen that the EHL-RF algorithm is more robust than the HL-RF method. Finally, the effects of corrosion time were investigated using the EHL-RF algorithm for a reinforced concrete beam based on flexural strength in pitting and general corrosion. The model uncertainties were considered in the resistance and load terms of the flexural strength limit state function. The results illustrated that increasing the corrosion time-period leads to an increase in the failure probability of the corroded concrete beam.
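    The classical HL-RF iteration that EHL-RF enhances fits in a few lines. The limit state below is a standard nonlinear FORM benchmark with known reliability index beta = 2.5, not the paper's corroded-beam limit state, and the function names are illustrative; variables are assumed already transformed to standard normal space.

```python
import numpy as np
from math import erfc, sqrt

def hl_rf(g, grad, u0, tol=1e-10, max_iter=100):
    """Basic HL-RF iteration: repeatedly project onto the linearized
    limit state g(u) ~ g(u_k) + grad(u_k).(u - u_k) = 0."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gv, gr = g(u), grad(u)
        u_new = (gr @ u - gv) / (gr @ gr) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)             # reliability index (distance to MPP)
    pf = 0.5 * erfc(beta / sqrt(2.0))    # Phi(-beta)
    return beta, pf

# Benchmark limit state with exact beta = 2.5:
c = 1.0 / sqrt(2.0)
g = lambda u: 0.1 * (u[0] - u[1])**2 - c * (u[0] + u[1]) + 2.5
grad = lambda u: np.array([0.2 * (u[0] - u[1]) - c,
                           -0.2 * (u[0] - u[1]) - c])
beta, pf = hl_rf(g, grad, [0.0, 0.0])
```

On strongly nonlinear limit states this basic iteration can oscillate or diverge, which is the failure mode that enhanced variants such as EHL-RF are designed to control.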

  17. A reliable method for intracranial electrode implantation and chronic electrical stimulation in the mouse brain.

    Science.gov (United States)

    Jeffrey, Melanie; Lang, Min; Gane, Jonathan; Wu, Chiping; Burnham, W McIntyre; Zhang, Liang

    2013-08-06

    Electrical stimulation of brain structures has been widely used in rodent models for kindling or modeling deep brain stimulation used clinically. This requires surgical implantation of intracranial electrodes and subsequent chronic stimulation in individual animals for several weeks. Anchoring screws and dental acrylic have long been used to secure implanted intracranial electrodes in rats. However, such an approach is limited when carried out in mouse models as the thin mouse skull may not be strong enough to accommodate the anchoring screws. We describe here a screw-free, glue-based method for implanting bipolar stimulating electrodes in the mouse brain and validate this method in a mouse model of hippocampal electrical kindling. Male C57 black mice (initial ages of 6-8 months) were used in the present experiments. Bipolar electrodes were implanted bilaterally in the hippocampal CA3 area for electrical stimulation and electroencephalographic recordings. The electrodes were secured onto the skull via glue and dental acrylic but without anchoring screws. A daily stimulation protocol was used to induce electrographic discharges and motor seizures. The locations of implanted electrodes were verified by hippocampal electrographic activities and later histological assessments. Using the glue-based implantation method, we implanted bilateral bipolar electrodes in 25 mice. Electrographic discharges and motor seizures were successfully induced via hippocampal electrical kindling. Importantly, no animal encountered infection in the implanted area or a loss of implanted electrodes after 4-6 months of repetitive stimulation/recording. We suggest that the glue-based, screw-free method is reliable for chronic brain stimulation and high-quality electroencephalographic recordings in mice. The technical aspects described in this study may help future studies in mouse models.

  18. Bridging Human Reliability Analysis and Psychology, Part 1: The Psychological Literature Review for the IDHEAS Method

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring; Jeffrey C. Joe; Katya L. Le Blanc; Jing Xing

    2012-06-01

    In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models, in particular issues related to intra- and inter-method variability of results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize the literature review, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.

  19. On method of solving third-order ordinary differential equations directly using Bernstein polynomials

    Science.gov (United States)

    Khataybeh, S. N.; Hashim, I.

    2018-04-01

    In this paper, we propose for the first time a method based on Bernstein polynomials for solving directly a class of third-order ordinary differential equations (ODEs). This method gives a numerical solution by converting the equation into a system of algebraic equations which is solved directly. Some numerical examples are given to show the applicability of the method.

  20. Variable High Order Multiblock Overlapping Grid Methods for Mixed Steady and Unsteady Multiscale Viscous Flows

    Science.gov (United States)

    Sjogreen, Bjoern; Yee, H. C.

    2007-01-01

    Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets on other parts of the flow field, are difficult to capture accurately and efficiently employing the same numerical scheme even under the multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low order and high order shock-capturing schemes for the subject flows, a multiblock overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.

  1. Reliability of a new method for measuring coronal trunk imbalance, the axis-line-angle technique.

    Science.gov (United States)

    Zhang, Rui-Fang; Liu, Kun; Wang, Xue; Liu, Qian; He, Jia-Wei; Wang, Xiang-Yang; Yan, Zhi-Han

    2015-12-01

    Accurate determination of the extent of trunk imbalance in the coronal plane plays a key role in an evaluation of patients with trunk imbalance, such as patients with adolescent idiopathic scoliosis. An established, widely used practice in evaluating trunk imbalance is to drop a plumb line from the C7 vertebra to a key reference axis, the central sacral vertical line (CSVL) in full-spine standing anteroposterior radiographs, and measure the distance between them, the C7-CSVL. However, measuring the CSVL is subject to intraobserver differences, is error-prone, and is of poor reliability. Therefore, the development of a different way to measure trunk imbalance is needed. This study aimed to describe a new method to measure coronal trunk imbalance, the axis-line-angle technique (ALAT), which measures the angle at the intersection between the C7 plumb line and an axis line drawn from the vertebral centroid of C7 to the middle of the superior border of the symphysis pubis, and to compare the reliability of the ALAT with that of the C7-CSVL. A prospective study at a university hospital was used. The patient sample consisted of 69 consecutively enrolled male and female patients, aged 10-18 years, who had trunk imbalance defined as a C7-CSVL longer than 20 mm on computed full-spine standing anteroposterior radiographs. Using a picture archiving and communication system, three radiologists independently evaluated trunk imbalance on the 69 computed radiographs by measuring the C7-CSVL and by measuring the angle determined by the ALAT. Data were analyzed to determine the correlations between the two measures of trunk imbalance, and to determine intraobserver and interobserver reliabilities of each of them. Overall results from the measurements by the C7-CSVL and the ALAT were significantly moderately correlated
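    Given image coordinates of the C7 centroid and the pubis midpoint, the ALAT angle itself is elementary trigonometry: the angle between the vertical plumb line and the axis line. The sketch below uses hypothetical coordinates, not measurements from the study.

```python
from math import atan2, degrees

def alat_angle(c7, pubis_mid):
    """Angle (degrees) between the vertical C7 plumb line and the axis line
    from the C7 centroid to the midpoint of the symphysis pubis, in image
    coordinates (x = lateral, y = craniocaudal)."""
    dx = pubis_mid[0] - c7[0]        # lateral offset
    dy = pubis_mid[1] - c7[1]        # distance along the plumb line
    return degrees(atan2(abs(dx), abs(dy)))

# Hypothetical coordinates in mm: a 10 mm lateral offset over a 400 mm drop.
angle = alat_angle((0.0, 0.0), (10.0, 400.0))   # about 1.43 degrees
```

Because an angle normalizes the lateral offset by trunk length, it is less sensitive to patient height than a raw C7-CSVL distance.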

  2. A Modified AH-FDTD Unconditionally Stable Method Based on High-Order Algorithm

    Directory of Open Access Journals (Sweden)

    Zheng Pan

    2017-01-01

    Full Text Available The unconditionally stable Associated-Hermite (AH) FDTD method has attracted increasing attention in computational electromagnetics for its compact time-frequency property. Because fewer orders of the AH basis are needed in signal reconstruction, computational efficiency can be improved further. In order to further improve the accuracy of the traditional AH-FDTD, a high-order algorithm is introduced. Using this method, the dispersion error induced by the space grid can be reduced, which makes it possible to set a coarser grid. The simulation results show that, on a coarse grid, the waveforms obtained from the proposed method match well with the analytic result, and the accuracy of the proposed method is higher than that of the traditional AH-FDTD. The efficiency of the proposed method is also higher than that of the traditional FDTD method in analysing 2D waveguide problems with fine structures.

  3. Aligning Order Picking Methods, Incentive Systems, and Regulatory Focus to Increase Performance

    NARCIS (Netherlands)

    de Vries, J.; de Koster, R.; Stam, D.

    2016-01-01

    A controlled field experiment investigates order picking performance in terms of productivity. We examined three manual picker-to-parts order picking methods (parallel, zone, and dynamic zone picking) under two different incentive systems (competition-based vs. cooperation-based) for pickers with

  4. Aligning order picking methods, incentive systems, and regulatory focus to increase performance

    NARCIS (Netherlands)

    J. de Vries (Jelle); M.B.M. de Koster (René); D.A. Stam (Daan)

    2015-01-01

    A unique controlled field experiment investigates order picking performance (in terms of productivity and quality). We examined three manual picker-to-parts order picking methods (parallel, zone, and dynamic zone picking) under two different incentive systems (competition-based versus

  5. A stochastic collocation method for the second order wave equation with a discontinuous random speed

    KAUST Repository

    Motamed, Mohammad; Nobile, Fabio; Tempone, Raul

    2012-01-01

    In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical

  6. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    Science.gov (United States)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
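    For context, the multi-frame retrieval that such calibration methods refine is the standard phase-shifting formula. The sketch below is a minimal four-step retrieval on synthetic fringes; no zeroth-order term, phase-shifting error or calibration is modelled, and all names and numbers are illustrative.

```python
import numpy as np

# Four synthetic interferograms with nominal shifts 0, pi/2, pi, 3*pi/2:
# I_k = bg + mod * cos(phi + k*pi/2). The standard four-step formula then
# recovers phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
x = np.linspace(0.0, 1.0, 256)
phi_true = 2.0 * np.pi * x**2                 # arbitrary smooth test phase
bg, mod = 0.5, 0.4                            # background and modulation
I1, I2, I3, I4 = (bg + mod * np.cos(phi_true + k * np.pi / 2) for k in range(4))
phi_rec = np.arctan2(I4 - I2, I1 - I3)

# Compare modulo 2*pi, since the recovered phase is wrapped.
max_err = float(np.max(np.abs(np.angle(np.exp(1j * (phi_rec - phi_true))))))
```

With ideal shifts the retrieval is exact to machine precision; phase-shifting errors and a zeroth-order term perturb this formula, which is what the paper's calibration addresses.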

  7. An Equal-Order DG Method for the Incompressible Navier-Stokes Equations

    KAUST Repository

    Cockburn, Bernardo; Kanschat, Guido; Schötzau, Dominik

    2008-01-01

    We introduce and analyze a discontinuous Galerkin method for the incompressible Navier-Stokes equations that is based on finite element spaces of the same polynomial order for the approximation of the velocity and the pressure. Stability

  8. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward

  9. A reliable and economical method for gaining mouse embryonic fibroblasts capable of preparing feeder layers.

    Science.gov (United States)

    Jiang, Guangming; Wan, Xiaoju; Wang, Ming; Zhou, Jianhua; Pan, Jian; Wang, Baolong

    2016-08-01

    Mouse embryonic fibroblasts (MEFs) are widely used to prepare feeder layers for culturing embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs) in vitro. Transportation lesions and exorbitant prices make commercially obtained MEFs unsuitable for long-term research. The aim of the present study is to establish a method which enables researchers to gain MEFs from mice and establish feeder layers by themselves in ordinary laboratories. MEFs were isolated from ICR mouse embryos at 12.5-17.5 days post-coitum (DPC) and cultured in vitro. At P2-P7, the cells were inactivated with mitomycin C or by X-ray irradiation. Then they were used to prepare feeder layers. The key factors of the whole protocol were analyzed to determine the optimal conditions for the method. The results revealed that MEFs isolated at 12.5-13.5 DPC and cultured to P3 were the best choice for feeder preparation, while P2 and P4-P5 MEFs were also suitable for the purpose. The P3-P5 MEFs treated with 10 μg/ml of mitomycin C for 3 h, or irradiated with X-ray at 1.5 Gy/min for 25 Gy, were the most suitable feeder cells. Treating MEFs with 10 μg/ml of mitomycin C for 2.5 h or 15 μg/ml for 2.0 h, or irradiating the cells with 20 Gy of X-ray at 2.0 Gy/min, could all serve as alternative methods for P3-P4 cells. Our study provides a reliable and economical way to obtain large amounts of qualified MEFs for long-term research on ESCs or iPSCs.

  10. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    Science.gov (United States)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on a Legendre wavelet in which the Legendre polynomial is used. The mechanism of the method is to use collocation points, which converts the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to those of other methods. The proposed method is computationally more effective and leads to more accurate results compared with other methods from the literature.

  11. Mixed first- and second-order transport method using domain decomposition techniques for reactor core calculations

    International Nuclear Information System (INIS)

    Girardi, E.; Ruggieri, J.M.

    2003-01-01

    The aim of this paper is to present the latest developments made on a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, with two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinate Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation. The resulting algorithm is then provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
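    The Schwarz coupling idea can be illustrated on a model 1D diffusion problem standing in for the transport equations: two overlapping subdomains exchange interface values until the iterates agree. Grid sizes, overlap and iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def solve_poisson(n, a, b, ua, ub, f):
    """Second-order FD solve of -u'' = f on [a, b], u(a) = ua, u(b) = ub."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2                  # fold boundary data into the RHS
    rhs[-1] += ub / h**2
    u = np.empty(n + 2)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Alternating Schwarz on [0, 1] with overlapping subdomains [0, 0.6] and
# [0.4, 1]; model problem -u'' = 1, u(0) = u(1) = 0, exact u = x(1 - x)/2.
f = lambda x: np.ones_like(x)
g_right = 0.0                            # initial guess at interface x = 0.6
for _ in range(30):
    xl, ul = solve_poisson(59, 0.0, 0.6, 0.0, g_right, f)
    g_left = np.interp(0.4, xl, ul)      # pass value at x = 0.4 to the right
    xr, ur = solve_poisson(59, 0.4, 1.0, g_left, 0.0, f)
    g_right = np.interp(0.6, xr, ur)     # pass value at x = 0.6 to the left

err = float(np.max(np.abs(ul - xl * (1.0 - xl) / 2.0)))
```

The contraction rate depends on the overlap width; the paper's algorithm couples two different discretizations across the subdomains in the same alternating fashion.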

  12. Symplectic and trigonometrically fitted symplectic methods of second and third order

    International Nuclear Information System (INIS)

    Monovasilis, Th.; Simos, T.E.

    2006-01-01

    The numerical integration of Hamiltonian systems by symplectic and trigonometrically fitted symplectic methods is considered in this Letter. We construct new symplectic and trigonometrically fitted symplectic methods of second and third order. We apply our new methods, as well as other existing methods, to the numerical integration of the harmonic oscillator, the 2D harmonic oscillator with an integer frequency ratio, and an orbit problem studied by Stiefel and Bettis.
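    A minimal example of a plain (non-trigonometrically-fitted) second-order symplectic scheme is Störmer-Verlet on the harmonic oscillator; its energy error stays bounded over long times rather than drifting. Step size and horizon below are illustrative.

```python
def verlet(q, p, h, steps):
    """Stormer-Verlet (leapfrog): an explicit second-order symplectic
    integrator, here for the harmonic oscillator H = (p**2 + q**2)/2,
    whose force is -q."""
    for _ in range(steps):
        p -= 0.5 * h * q       # half kick
        q += h * p             # drift
        p -= 0.5 * h * q       # half kick
    return q, p

q, p = verlet(1.0, 0.0, h=0.01, steps=10_000)    # integrate to t = 100
energy_drift = abs(0.5 * (p * p + q * q) - 0.5)  # exact H is 0.5 throughout
```

Trigonometric fitting tunes the coefficients to the known oscillation frequency, which further reduces the phase error on problems like this one.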

  13. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
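    The empirical order-of-convergence check mentioned above can be reproduced in miniature on a single grid (not the SAMR setting): halve the step of a fourth-order central difference and estimate the order p from the error ratio.

```python
from math import sin, cos, log2

def d4(f, x, h):
    """Fourth-order central difference approximation of f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = 1.0
e1 = abs(d4(sin, x, 0.10) - cos(x))
e2 = abs(d4(sin, x, 0.05) - cos(x))
p = log2(e1 / e2)              # empirical order; should be close to 4
```

On refined meshes the same ratio test exposes whether low-order interpolation at coarse-fine interfaces is degrading the interior discretization order.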

  14. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.

  15. A reliable morphological method to assess the age of male Anopheles gambiae

    Directory of Open Access Journals (Sweden)

    Killeen Gerry F

    2006-07-01

    Full Text Available Abstract Background Release of genetically-modified (GM) or sterile male mosquitoes for malaria control is hampered by the inability to assess the age and mating history of free-living male Anopheles. Methods Age and mating-related changes in the reproductive system of male Anopheles gambiae were quantified and used to fit predictive statistical models. These models, based on numbers of spermatocysts, relative size of the sperm reservoir and presence/absence of a clear area around the accessory gland, were evaluated using an independent sample of mosquitoes whose status was blinded during the experiment. Results The number of spermatocysts in male testes decreased with age, and the relative size of their sperm reservoir increased. The presence of a clear area around the accessory glands was also linked to age and mating status. A quantitative model was able to categorize males from the blind trial into age groups of young (≤ 4 days) and old (> 4 days) with an overall efficiency of 89%. Using the parameters of this model, a simple table was compiled that can be used to predict male age. In contrast, mating history could not be reliably assessed as virgins could not be distinguished from mated males. Conclusion Simple assessment of a few morphological traits which are easily collected in the field allows accurate age-grading of male An. gambiae. This simple, yet robust, model enables evaluation of demographic patterns and mortality in wild and released males in populations targeted by GM or sterile male-based control programmes.

  16. Method and apparatus for a nuclear reactor for increasing reliability to scram control elements

    International Nuclear Information System (INIS)

    Bevilacqua, F.

    1976-01-01

    A description is given of a method and apparatus for increasing the reliability of linear drive devices of a nuclear reactor to scram the control elements held in a raised position thereby. Each of the plurality of linear drive devices includes a first type of holding means associated with the drive means of the linear drive device and a second type of holding means distinct and operatively dissimilar from the first type. The system of linear drive devices having both types of holding means are operated in such a manner that the control elements of a portion of the linear drive devices are only held in a raised position by the first holding means and the control elements of the remaining portion of linear drive devices are held in a raised position by only the second type of holding means. Since the two types of holding means are distinct from one another and are operatively dissimilar, the probability of failure of both systems to scram as a result of common mode failure will be minimized. Means may be provided to positively detect disengagement of the first type of holding means and engagement of the second type of holding means for those linear drive devices being operative to hold the control elements in a raised position with the second type of holding means

  17. Small metal soft tissue foreign body extraction by using 3D CT guidance: A reliable method

    International Nuclear Information System (INIS)

    Tao, Kai; Xu, Sen; Liu, Xiao-yan; Liang, Jiu-long; Qiu, Tao; Tan, Jia-nan; Che, Jian-hua; Wang, Zi-hua

    2012-01-01

    Objective: To introduce a useful and accurate technique for locating and removing small metal foreign bodies in the soft tissues. Methods: Eight patients presented with suspected small metal foreign bodies retained in the soft tissues of various body regions. Under local anesthesia, 3–6 needles from 5 ml or 1 ml syringes were introduced along three different planes around the entry point of the foreign bodies. Using these needle finders, the small metal foreign bodies (FBs) were confirmed under 3D CT guidance. Based on the CT findings, the soft tissues were dissected along the path of the closest needle and the FBs were easily found and removed according to their relation to the closest needle finder. Results: Eight metal foreign bodies (3 slices, 3 nails, 1 fish hook, 1 needlepoint) were successfully removed under 3D CT guidance in all patients. The procedures took between 35 min and 50 min and the operation times took between 15 min and 25 min. No complications arose after the treatment. Conclusion: The 3D CT-guided technique is a good alternative for the removal of small metal foreign bodies retained in the soft tissues as it is relatively accurate, reliable and quick, carries a low risk of complications and can be a first-choice procedure for the extraction of small metal foreign bodies.

  18. Methods for studying short-range order in solid binary solutions

    International Nuclear Information System (INIS)

    Beranger, Gerard

    1969-12-01

    The short-range order definition and its characteristic parameters are first recalled. The different methods to study short-range order are then examined: X-ray diffusion, electrical resistivity, specific heat and thermoelectric power, neutron diffraction, electron spin resonance, and the study of thermodynamic and mechanical properties. The theory of the X-ray diffraction effects due to short-range order and the subsequent experimental method are emphasized. The principal results obtained from binary systems, by the different experimental techniques, are reported and briefly discussed. The Au-Cu, Li-Mg, Au-Ni and Cu-Zn systems are moreover described. (author) [fr

  19. Higher-order schemes for the Laplace transformation method for parabolic problems

    KAUST Repository

    Douglas, C.

    2011-01-01

    In this paper we solve linear parabolic problems using a novel three-stage algorithm. First, the time discretization is approximated using the Laplace transformation method, which is both parallel in time (and can be in space, too) and extremely high order convergent. Second, higher-order compact schemes of order four and six are used for the spatial discretization. Finally, the discretized linear algebraic systems are solved using multigrid to show the actual convergence rate for numerical examples, which are compared to other numerical solution methods. © 2011 Springer-Verlag.

  20. High-order FDTD methods via derivative matching for Maxwell's equations with material interfaces

    International Nuclear Information System (INIS)

    Zhao Shan; Wei, G.W.

    2004-01-01

    This paper introduces a series of novel hierarchical implicit derivative matching methods to restore the accuracy of high-order finite-difference time-domain (FDTD) schemes of computational electromagnetics (CEM) with material interfaces in one (1D) and two spatial dimensions (2D). By making use of fictitious points, systematic approaches are proposed to locally enforce the physical jump conditions at material interfaces in a preprocessing stage, to arbitrarily high orders of accuracy in principle. While often limited by numerical instability, orders up to 16 and 12 are achieved, respectively, in 1D and 2D. Detailed stability analyses are presented for the present approach to examine the upper limit in constructing embedded FDTD methods. As natural generalizations of the high-order FDTD schemes, the proposed derivative matching methods automatically reduce to the standard FDTD schemes when the material interfaces are absent. An interesting feature of the present approach is that it encompasses a variety of schemes of different orders in a single code. Another feature of the present approach is that it can be robustly implemented with other high accuracy time-domain approaches, such as the multiresolution time-domain method and the local spectral time-domain method, to cope with material interfaces. Numerical experiments on both 1D and 2D problems are carried out to test the convergence, examine the stability, assess the efficiency, and explore the limitations of the proposed methods. It is found that operating at their best capacity, the proposed high-order schemes could be over 2000 times more efficient than their fourth-order versions in 2D. In conclusion, the present work indicates that the proposed hierarchical derivative matching methods might lead to practical high-order schemes for numerical solution of time-domain Maxwell's equations with material interfaces.

  1. LOO: a low-order nonlinear transport scheme for acceleration of method of characteristics

    International Nuclear Information System (INIS)

    Li, Lulu; Smith, Kord; Forget, Benoit; Ferrer, Rodolfo

    2015-01-01

    This paper presents a new physics-based multi-grid nonlinear acceleration method: the low-order operator method, or LOO. LOO uses a coarse space-angle multi-group method of characteristics (MOC) neutron transport calculation to accelerate the fine space-angle MOC calculation. LOO is designed to capture more angular effects than diffusion-based acceleration methods through a transport-based low-order solver. LOO differs from existing transport-based acceleration schemes in that it emphasizes simplified coarse space-angle characteristics and preserves physics in quadrant phase-space. The details of the method, including the restriction step, the low-order iterative solver and the prolongation step are discussed in this work. LOO shows comparable convergence behavior to coarse mesh finite difference on several two-dimensional benchmark problems while not requiring any under-relaxation, making it a robust acceleration scheme. (author)

  2. Overlay control methodology comparison: field-by-field and high-order methods

    Science.gov (United States)

    Huang, Chun-Yen; Chiu, Chui-Fu; Wu, Wen-Bin; Shih, Chiang-Lin; Huang, Chin-Chou Kevin; Huang, Healthy; Choi, DongSub; Pierson, Bill; Robinson, John C.

    2012-03-01

    Overlay control in advanced integrated circuit (IC) manufacturing is becoming one of the leading lithographic challenges in the 3x and 2x nm process nodes. Production overlay control can no longer meet the stringent emerging requirements based on linear composite wafer and field models with sampling of 10 to 20 fields and 4 to 5 sites per field, which was the industry standard for many years. Methods that have emerged include overlay metrology in many or all fields, including the high order field model method called high order control (HOC), and field by field control (FxFc) methods, also called correction per exposure. The HOC and FxFc methods were initially introduced as relatively infrequent scanner qualification activities meant to supplement linear production schemes. More recently, however, it has become clear that production control also requires intense sampling and similar high-order and FxFc methods. The added control benefits of high-order and FxFc overlay methods need to be balanced against the increased metrology requirements without putting material at risk. Of critical importance is the proper control of edge fields, which requires intensive sampling in order to minimize signatures. In this study we compare various methods of overlay control, including the performance levels that can be achieved.

  3. Hybrid High-Order methods for finite deformations of hyperelastic materials

    Science.gov (United States)

    Abbas, Mickaël; Ern, Alexandre; Pignet, Nicolas

    2018-01-01

    We devise and evaluate numerically Hybrid High-Order (HHO) methods for hyperelastic materials undergoing finite deformations. The HHO methods use as discrete unknowns piecewise polynomials of order k≥1 on the mesh skeleton, together with cell-based polynomials that can be eliminated locally by static condensation. The discrete problem is written as the minimization of a broken nonlinear elastic energy where a local reconstruction of the displacement gradient is used. Two HHO methods are considered: a stabilized method where the gradient is reconstructed as a tensor-valued polynomial of order k and a stabilization is added to the discrete energy functional, and an unstabilized method which reconstructs a stable higher-order gradient and circumvents the need for stabilization. Both methods satisfy the principle of virtual work locally with equilibrated tractions. We present a numerical study of the two HHO methods on test cases with known solution and on more challenging three-dimensional test cases including finite deformations with strong shear layers and cavitating voids. We assess the computational efficiency of both methods, and we compare our results to those obtained with an industrial software using conforming finite elements and to results from the literature. The two HHO methods exhibit robust behavior in the quasi-incompressible regime.

  4. Wind turbine performance: Methods and criteria for reliability of measured power curves

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, D.A. [Advanced Wind Turbines Inc., Seattle, WA (United States)

    1996-12-31

    In order to evaluate the performance of prototype turbines, and to quantify incremental changes in performance through field testing, Advanced Wind Turbines (AWT) has been developing methods and requirements for power curve measurement. In this paper, field test data is used to illustrate several issues and trends which have resulted from this work. Averaging and binning processes, data hours per wind-speed bin, wind turbulence levels, and anemometry methods are all shown to have significant impacts on the resulting power curves. Criteria are given by which the AWT power curves show a high degree of repeatability, and these criteria are compared and contrasted with current published standards for power curve measurement. 6 refs., 5 figs., 5 tabs.
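
    The averaging-and-binning step discussed above can be sketched as a simple method-of-bins computation. The bin width and data below are illustrative; a production analysis would follow the applicable measurement standard and also account for air density, turbulence level, and minimum data hours per bin.

```python
def method_of_bins(wind_speeds, powers, bin_width=0.5):
    # Group (wind speed, power) samples into wind-speed bins and average
    # the power in each bin; returns {bin centre: (mean power, sample count)}
    bins = {}
    for v, p in zip(wind_speeds, powers):
        k = int(v // bin_width)
        bins.setdefault(k, []).append(p)
    return {
        (k + 0.5) * bin_width: (sum(ps) / len(ps), len(ps))
        for k, ps in sorted(bins.items())
    }

curve = method_of_bins([4.1, 4.3, 4.4, 5.1, 5.2], [30.0, 34.0, 32.0, 55.0, 57.0])
print(curve)  # {4.25: (32.0, 3), 5.25: (56.0, 2)}
```

    Keeping the per-bin sample count makes it easy to enforce a minimum number of data hours per bin, one of the repeatability criteria the paper examines.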

  5. DNA Barcoding as a Reliable Method for the Authentication of Commercial Seafood Products

    Directory of Open Access Journals (Sweden)

    Silvia Nicolè

    2012-01-01

    Full Text Available Animal DNA barcoding allows researchers to identify different species by analyzing a short nucleotide sequence, typically the mitochondrial gene cox1. In this paper, we use DNA barcoding to genetically identify seafood samples that were purchased from various locations throughout Italy. We adopted a multi-locus approach to analyze the cob, 16S-rDNA and cox1 genes, and compared our sequences to reference sequences in the BOLD and GenBank online databases. Our method is a rapid and robust technique that can be used to genetically identify crustaceans, mollusks and fishes. This approach could be applied in the future for conservation, particularly for monitoring illegal trade of protected and endangered species. Additionally, this method could be used for authentication in order to detect mislabeling of commercially processed seafood.
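
    The identification step can be sketched as a nearest-reference match by percent identity. The sequences below are toy, invented fragments and the function assumes pre-aligned, equal-length strings; a real barcoding analysis aligns the query and searches the BOLD or GenBank databases.

```python
def percent_identity(a, b):
    # Fraction of matching positions between two aligned, equal-length sequences
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def identify(query, references):
    # Return the reference species whose barcode is most similar to the query
    return max(references, key=lambda sp: percent_identity(query, references[sp]))

refs = {  # hypothetical cox1 fragments, for illustration only
    "Thunnus thynnus": "ATGGCACTAAGC",
    "Gadus morhua":    "ATGCCTTTAAGG",
}
print(identify("ATGGCACTTAGC", refs))  # Thunnus thynnus (11/12 positions match)
```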

  6. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Bang-Hung [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Tsai, Sung-Yi [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Imaging Medical, St.Martin De Porres Hospital, Chia-Yi, Taiwan (China); Wang, Shyh-Jen [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Su, Tung-Ping; Chou, Yuan-Hwa [Department of Psychiatry, Taipei Veterans General Hospital, Taipei, Taiwan (China); Chen, Chia-Chieh [Institute of Nuclear Energy Research, Longtan, Taiwan (China); Chen, Jyh-Cheng, E-mail: jcchen@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China)

    2011-08-21

    The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) on a voxel-by-voxel analysis to find differences in the cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of ¹²³I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained through a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated on a voxel-by-voxel basis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum − 1) of ¹²³I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. The Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no significant statistical finding in any cerebral area using SPM2.
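
    The specific uptake ratio used above is a simple ratio to a nonspecific reference region. A minimal sketch, with illustrative ROI values chosen to reproduce the reported midbrain mean rather than actual data from the study:

```python
def specific_uptake_ratio(target, cerebellum):
    # SUR = target/cerebellum - 1, with the cerebellum as the reference
    # region for nonspecific binding
    return target / cerebellum - 1.0

# hypothetical mean ROI uptake values (arbitrary units)
midbrain, cerebellum = 2.78, 1.0
print(round(specific_uptake_ratio(midbrain, cerebellum), 2))  # 1.78
```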

  7. Powernet profiling. A method for the development of reliable vehicle powernets; Powernet Profiling. Eine Methode zur Entwicklung sicherer Energiebordnetze

    Energy Technology Data Exchange (ETDEWEB)

    Mathar, Sebastian; Lammermann, Matthias [Inst. fuer Kraftfahrzeuge der RWTH Aachen (Germany); Viscido, Toni [fka - Forschungsgesellschaft Kraftfahrwesen mbH, Aachen (Germany)

    2008-07-01

    This publication shows the influences of innovative mild-hybrid functions on the design of critical powernet components. It points out which special demands the energy storage devices and the alternator need to fulfil when novel powernet measures are introduced. The consequences for the electrical vehicle powernet are derived and it is shown which countermeasures can be taken in order to avoid negative influences on the customer as far as possible. This paper demonstrates how Powernet Profiling can be used for these purposes as an integral method for the development and design of safe vehicle powernets. (orig.)

  8. The response analysis of fractional-order stochastic system via generalized cell mapping method.

    Science.gov (United States)

    Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei

    2018-01-01

    This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.

  9. Physical principles underlying the experimental methods for studying the orientational order of liquid crystals

    International Nuclear Information System (INIS)

    Limmer, S.

    1989-01-01

    The basic physical principles underlying different experimental methods frequently used for the determination of orientational order parameters of liquid crystals are reviewed. The methods dealt with here include the anisotropy of the diamagnetic susceptibility, birefringence, linear dichroism, Raman scattering, fluorescence depolarization, electron paramagnetic resonance (EPR), and nuclear magnetic resonance (NMR). The fundamental assertions that can be obtained by the different methods, as well as their advantages, drawbacks and limitations, are inspected. Typical sources of uncertainties and inaccuracies are discussed. To quantitatively evaluate the experimental data with reference to orientational order, the general tensor formalism developed by Schmiedel was employed throughout, according to which the order matrix comprises, in general, 25 real elements. Within this context the interplay of orientational ordering and molecular conformation is scrutinized. (author)
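
    The quantity most commonly extracted by all of these techniques is the scalar uniaxial order parameter, the second Legendre moment of the molecular orientation distribution. A minimal sketch with hypothetical angle data:

```python
import math

def nematic_order_parameter(angles_deg):
    # S = <(3 cos^2(theta) - 1) / 2>, theta = angle between the molecular
    # long axis and the director; S = 1 perfect order, S = 0 isotropic
    cs = [math.cos(math.radians(t)) for t in angles_deg]
    return sum((3 * c * c - 1) / 2 for c in cs) / len(cs)

print(nematic_order_parameter([0, 0, 0]))           # 1.0 (perfect alignment)
print(round(nematic_order_parameter([90, 90]), 1))  # -0.5 (all perpendicular)
```

    Experiments such as NMR or Raman scattering do not measure the angles directly; they measure anisotropic averages from which this moment (and, in the general tensor formalism, higher-rank elements) is inferred.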

  10. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    Science.gov (United States)

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
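
    A stylized, pure-Python sketch of the time-decomposition idea follows: several independent running integrals, a weighting derived from their standard deviation, and a weighted exponential fit whose plateau is the viscosity estimate. Synthetic data stands in for the MD pressure-tensor integrals, and a single exponential is fitted for brevity where the paper fits a double exponential.

```python
import math, random

def running_integrals(n_traj, n_t, dt, eta_true=1.0, tau=1.0, noise=0.05, seed=0):
    # Surrogate Green-Kubo running integrals from independent trajectories:
    # each approaches eta_true like eta_true*(1 - exp(-t/tau)), with noise
    # whose magnitude grows with t, as it does for real GK integrals.
    rng = random.Random(seed)
    ts = [i * dt for i in range(1, n_t + 1)]
    runs = [[eta_true * (1 - math.exp(-t / tau)) + noise * t * rng.gauss(0, 1)
             for t in ts] for _ in range(n_traj)]
    return ts, runs

def plateau_estimate(ts, runs):
    # Average the running integrals across trajectories, weight each time
    # point by 1/variance across trajectories, and fit a*(1 - exp(-t/tau)).
    n = len(runs)
    mean = [sum(r[i] for r in runs) / n for i in range(len(ts))]
    var = [sum((r[i] - m) ** 2 for r in runs) / (n - 1) + 1e-12
           for i, m in enumerate(mean)]
    w = [1.0 / v for v in var]
    best = None
    for k in range(1, 40):                       # coarse grid over tau
        tau = 0.25 * k
        x = [1 - math.exp(-t / tau) for t in ts]
        a = (sum(wi * xi * yi for wi, xi, yi in zip(w, x, mean))
             / sum(wi * xi * xi for wi, xi in zip(w, x)))
        sse = sum(wi * (yi - a * xi) ** 2 for wi, xi, yi in zip(w, x, mean))
        if best is None or sse < best[0]:
            best = (sse, a)
    return best[1]                               # fitted plateau = viscosity

ts, runs = running_integrals(n_traj=8, n_t=200, dt=0.05)
eta = plateau_estimate(ts, runs)
print(abs(eta - 1.0) < 0.1)  # True: recovers the synthetic plateau
```

    Because late times contribute large standard deviations, the 1/variance weights automatically de-emphasize the noisy tail, which is the role the paper's std-derived weighting function plays.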

  11. Development of a Method for Quantifying the Reliability of Nuclear Safety-Related Software

    Energy Technology Data Exchange (ETDEWEB)

    Yi Zhang; Michael W. Golay

    2003-10-01

    The work of our project is intended to help introduce digital technologies into nuclear power plant safety-related software applications. In our project we utilize a combination of modern software engineering methods: design process discipline and feedback, formal methods, automated computer-aided software engineering tools, automatic code generation, and extensive feasible structure flow path testing to improve software quality. The tactics include ensuring that the software structure is kept simple, permitting routine testing during design development, permitting extensive finished-product testing in the input data space of most likely service, and using test-based Bayesian updating to estimate the probability that a random software input will encounter an error upon execution. From the results obtained, the software reliability can be both improved and its value estimated. Hopefully our success in the project's work can aid the transition of the nuclear enterprise into the modern information world. In our work, we have been using the proprietary sample software, the digital Signal Validation Algorithm (SVA), provided by Westinghouse, and our work is being done with their collaboration. The SVA software is used for selecting the plant instrumentation signal set which is to be used as the input to the digital Plant Protection System (PPS). This is the system that automatically decides whether to trip the reactor. In our work, we are using the 001 computer-assisted software engineering (CASE) tool of Hamilton Technologies Inc. This tool is capable of stating the syntactic structure of a program, reflecting its state requirements, logical functions and data structure.
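
    The test-based Bayesian updating step can be illustrated with standard conjugate beta-binomial updating. This is a generic sketch; the project's actual statistical model is not spelled out in the abstract.

```python
def posterior_failure_prob(failures, tests, a0=1.0, b0=1.0):
    # Beta(a0, b0) prior on the per-demand failure probability, updated by
    # Bayes' rule after `tests` random-input executions with `failures`
    # observed errors; returns the posterior mean.
    a = a0 + failures
    b = b0 + (tests - failures)
    return a / (a + b)

# e.g. 5000 tests drawn from the expected input profile, none failing
print(round(posterior_failure_prob(0, 5000), 6))  # 0.0002
```

    Each additional failure-free test tightens the posterior, which is how extensive testing in the most-likely-service input space translates into a quantified reliability estimate.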

  12. Condition-based fault tree analysis (CBFTA): A new method for improved fault tree analysis (FTA), reliability and safety calculations

    International Nuclear Information System (INIS)

    Shalev, Dan M.; Tiran, Joseph

    2007-01-01

    Condition-based maintenance methods have changed systems reliability in general and that of individual systems in particular. Yet, this change has not affected system reliability analysis. System fault tree analysis (FTA) is performed during the design phase. It uses component failure rates derived from available sources such as handbooks, etc. Condition-based fault tree analysis (CBFTA) starts with the known FTA. Condition monitoring (CM) methods applied to systems (e.g. vibration analysis, oil analysis, electric current analysis, bearing CM, electric motor CM, and so forth) are used to determine updated failure rate values of sensitive components. The CBFTA method accepts updated failure rates and applies them to the FTA. The CBFTA periodically recalculates the top event (TE) failure rate (λTE), thus determining the probability of system failure and the probability of successful system operation, i.e. the system's reliability. FTA is a tool for enhancing system reliability during the design stages, but it has disadvantages, mainly that it does not relate to a specific system undergoing maintenance. CBFTA is a tool for updating the reliability values of a specific system and for calculating the residual life according to the system's monitored conditions. Using CBFTA, the original FTA is improved into a practical tool for use during the system's field-life phase, not just during the design phase. This paper describes the CBFTA method, and its advantages are demonstrated by an example.
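
    The recalculation of the top event from updated component probabilities can be sketched on a toy fault tree. The gates and numbers below are hypothetical, for illustration only, and independence of basic events is assumed.

```python
def or_gate(ps):
    # P(at least one input event occurs) = 1 - prod(1 - p_i), assuming independence
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(ps):
    # P(all input events occur), assuming independence
    q = 1.0
    for p in ps:
        q *= p
    return q

def top_event(pump_a, pump_b, controller):
    # Hypothetical tree: system fails if (pump A AND pump B fail) OR the
    # controller fails
    return or_gate([and_gate([pump_a, pump_b]), controller])

baseline = top_event(0.01, 0.01, 0.001)   # design-stage handbook values
updated = top_event(0.05, 0.01, 0.001)    # CM detects pump A degradation
print(updated > baseline)  # True: the recalculated top event reflects CM data
```

    In CBFTA this recomputation is repeated periodically as condition monitoring revises the component failure rates.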

  13. Reliability determination of aluminium electrolytic capacitors by the mean of various methods application to the protection system of the LHC

    CERN Document Server

    Perisse, F; Rojat, G

    2004-01-01

    The lifetime of power electronic components is often calculated from reliability reports, but this method can be questioned. In this article we compare the results of various reliability reports with an accelerated ageing test of the component and introduce the load-strength concept. Large aluminium electrolytic capacitors are taken as an example, in the context of the protection system of the LHC (Large Hadron Collider) at CERN, where a high level of reliability is essential. We notice important differences in MTBF (Mean Time Between Failures) according to the reliability report used. The accelerated ageing tests carried out show that a Weibull law is better adapted to determining the failure rates of components. The load-strength concept associated with accelerated ageing tests can be a solution for determining the lifetime of power electronic components.
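
    Once Weibull parameters have been fitted from ageing data, the mean life follows in closed form; this small sketch (illustrative parameter values) shows how the Weibull mean relates to the constant-failure-rate exponential model assumed by many reliability reports.

```python
import math

def weibull_mtbf(eta, beta):
    # Mean life of a Weibull(scale eta, shape beta) distribution:
    # MTBF = eta * Gamma(1 + 1/beta)
    return eta * math.gamma(1.0 + 1.0 / beta)

print(weibull_mtbf(1000.0, 1.0))             # 1000.0: beta = 1 is the exponential case
print(round(weibull_mtbf(1000.0, 2.5), 1))   # 887.3: wear-out shape, mean below eta
```

    A shape parameter beta > 1 models wear-out failures, which is why a Weibull law fits capacitor ageing better than a constant failure rate.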

  14. Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations

    KAUST Repository

    Abdulle, Assyr; Vilmart, Gilles; Zygalakis, Konstantinos C.

    2013-01-01

    We introduce a new family of explicit integrators for stiff Itô stochastic differential equations (SDEs) of weak order two. These numerical methods belong to the class of one-step stabilized methods with extended stability domains and do not suffer

  15. A Three Step Explicit Method for Direct Solution of Third Order ...

    African Journals Online (AJOL)

    This study produces a three step discrete Linear Multistep Method for Direct solution of third order initial value problems of ordinary differential equations of the form y'''= f(x,y,y',y''). Taylor series expansion technique was adopted in the development of the method. The differential system from the basis polynomial function to ...

  16. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.

    2017-07-25

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.

  17. An Iterative Regularization Method for Identifying the Source Term in a Second Order Differential Equation

    Directory of Open Access Journals (Sweden)

    Fairouz Zouyed

    2015-01-01

    Full Text Available This paper discusses the inverse problem of determining an unknown source in a second order differential equation from measured final data. This problem is ill-posed; that is, the solution (if it exists) does not depend continuously on the data. In order to solve the considered problem, an iterative method is proposed. Using this method a regularized solution is constructed and an a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, numerical results are presented to illustrate the accuracy and efficiency of this method.
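
    The abstract does not specify the iteration, so as a representative iterative regularization scheme, a Landweber-type iteration with early stopping (the discrepancy principle) on a discretized linear problem might look like:

```python
def landweber(A, b, omega, n_iter, tol):
    # Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    # stopped early once the residual norm drops below tol (discrepancy
    # principle); for noisy ill-posed problems the iteration count acts
    # as the regularization parameter.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        if sum(ri * ri for ri in r) ** 0.5 <= tol:
            break
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x

# small well-conditioned example: recover x from b = A x
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 3.0]
x = landweber(A, b, omega=0.2, n_iter=500, tol=1e-8)
print([round(v, 3) for v in x])  # [1.0, 3.0]
```

    Convergence requires omega below 2 divided by the largest squared singular value of A; for noisy data, stopping at the noise level rather than iterating to convergence is what regularizes the solution.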

  18. Reliability Assessment Method of Reactor Protection System Software by Using V and V Based Bayesian Nets

    International Nuclear Information System (INIS)

    Eom, H. S.; Park, G. Y.; Kang, H. G.; Son, H. S.

    2010-07-01

    We developed a methodology that can be practically used in the quantitative reliability assessment of safety-critical software for the protection system of nuclear power plants. The basis of the proposed methodology is the V and V process already used in the nuclear industry, which means that it is not affected by specific software development environments or parameters that are necessary for the reliability calculation. The modular and formal sub-BNs in the proposed methodology are a useful tool for constructing the whole Bayesian network (BN) model for the reliability assessment of a target software. The proposed V and V based BN model estimates the defects in the software according to the V and V results and then calculates the reliability of the software. A case study was carried out to validate the proposed methodology. The target software is the RPS software, which was developed by the KNICS project.

  19. High-order dynamic lattice method for seismic simulation in anisotropic media

    Science.gov (United States)

    Hu, Xiaolin; Jia, Xiaofeng

    2018-03-01

    The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; therefore, it represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that by adding more neighbouring particles into the lattice unit, the DLM will yield different spatial accuracy. According to the dispersion analysis, the high-order DLM presented here can be adapted to the spatial accuracy requirements of seismic wave simulations. For any given spatial accuracy, we can design a corresponding high-order lattice unit to satisfy the accuracy requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.

  20. A Method for The Assessing of Reliability Characteristics Relevant to an Assumed Position-Fixing Accuracy in Navigational Positioning Systems

    Directory of Open Access Journals (Sweden)

    Specht Cezary

    2016-09-01

    Full Text Available This paper presents a method which makes it possible to determine the reliability characteristics of navigational positioning systems relevant to an assumed value of permissible position-fixing error. The method allows calculation of the availability, reliability and continuity of operation of a position-fixing system for a position-fixing accuracy determined on the basis of formal requirements, both worldwide and national. The proposed mathematical model makes it possible to verify whether any navigational positioning system satisfies not only the position-fixing accuracy requirements of a given navigational application (for air, sea or land traffic) but also the remaining characteristics associated with the technical serviceability of the system.
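
    The characteristics the method quantifies can be illustrated with a toy calculation on a series of position errors checked against a permissible limit; the error values, limit and window below are made up, and the paper's mathematical model is more elaborate than this sketch.

```python
# Illustrative sketch (not the paper's model): estimating availability and
# continuity of a positioning system from a time series of position errors,
# given a permissible position-fixing error limit. All numbers are made up.

def availability(errors, limit):
    """Fraction of fix epochs whose position error is within the limit."""
    ok = [e <= limit for e in errors]
    return sum(ok) / len(ok)

def continuity(errors, limit, window):
    """Probability that, starting from a within-limit epoch, the system
    stays within the limit for `window` consecutive epochs."""
    starts = 0
    kept = 0
    for i in range(len(errors) - window):
        if errors[i] <= limit:
            starts += 1
            if all(e <= limit for e in errors[i + 1:i + 1 + window]):
                kept += 1
    return kept / starts if starts else float("nan")

errors = [1.2, 1.4, 1.3, 5.0, 1.1, 1.2, 1.3, 1.2, 4.8, 1.1, 1.0, 1.2]  # metres
print(availability(errors, limit=2.0))       # fraction of epochs within 2 m
print(continuity(errors, limit=2.0, window=3))
```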

  1. Reliability and validity in measurement of true humeral retroversion by a three-dimensional cylinder fitting method.

    Science.gov (United States)

    Saka, Masayuki; Yamauchi, Hiroki; Hoshi, Kenji; Yoshioka, Toru; Hamada, Hidetoshi; Gamada, Kazuyoshi

    2015-05-01

    Humeral retroversion is defined as the orientation of the humeral head relative to the distal humerus. Because none of the previous methods used to measure humeral retroversion strictly follows this definition, values obtained by these techniques vary and may be biased by morphologic variations of the humerus. The purpose of this study was 2-fold: to validate a method to define the axis of the distal humerus with a virtual cylinder and to establish the reliability of 3-dimensional (3D) measurement of humeral retroversion by this cylinder fitting method. Humeral retroversion in 14 baseball players (28 humeri) was measured by the 3D cylinder fitting method. The root mean square error was calculated to compare values obtained by a single tester and by 2 different testers using the embedded coordinate system. To establish the reliability, the intraclass correlation coefficient (ICC) and precision (standard error of measurement [SEM]) were calculated. The root mean square errors for the humeral coordinate system were small, and the reliability and precision of the 3D measurement of retroversion yielded an intratester ICC of 0.99 (SEM, 1.0°) and an intertester ICC of 0.96 (SEM, 2.8°). The error in measurements obtained by a distal humerus cylinder fitting method was small enough not to affect retroversion measurement. The 3D measurement of retroversion by this method provides excellent intratester and intertester reliability. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
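
    The reliability statistics reported above can be sketched as follows, assuming the common two-way ANOVA form ICC(3,1) and taking SEM = sqrt(MSE) (SD·sqrt(1−ICC) is another convention); the score matrix below is made up, not the study's data.

```python
import math

# Sketch of ICC(3,1) from a two-way ANOVA on a subjects-by-raters matrix,
# with SEM taken as sqrt(MSE). This is one common convention, not
# necessarily the exact variant used in the study.

def icc31(scores):
    """scores[s][r]: subject s rated by rater r. Returns (ICC(3,1), SEM)."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[s][r] for s in range(n)) / n for r in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    msr = ss_rows / (n - 1)
    # Guard against a tiny negative residual from floating-point cancellation.
    mse = max((ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1)), 0.0)
    icc = (msr - mse) / (msr + (k - 1) * mse)
    return icc, math.sqrt(mse)

# Two raters measuring retroversion (degrees) on five humeri (made-up data).
rater1 = [30.0, 35.0, 28.0, 40.0, 33.0]
rater2 = [31.0, 34.0, 29.0, 41.0, 32.0]
icc, sem = icc31(list(zip(rater1, rater2)))
print(round(icc, 3), round(sem, 2))
```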

  2. Numerical simulation for fractional order stationary neutron transport equation using Haar wavelet collocation method

    Energy Technology Data Exchange (ETDEWEB)

    Saha Ray, S., E-mail: santanusaharay@yahoo.com; Patra, A.

    2014-10-15

    Highlights: • A stationary transport equation has been solved using the Haar wavelet collocation method. • This paper demonstrates the great utility of Haar wavelets for nuclear science problems. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: In this paper the numerical solution of the fractional order stationary neutron transport equation is presented using the Haar wavelet collocation method (HWCM). The Haar wavelet collocation method is efficient and powerful in solving a wide class of linear and nonlinear differential equations. This paper intends to provide an application of Haar wavelets to nuclear science problems, describing the numerical solution of the fractional order stationary neutron transport equation in a homogeneous medium with isotropic scattering. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency and applicability of the method, two test problems are discussed.
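
    A minimal sketch of the ingredient the method rests on: the Haar basis evaluated at the standard collocation points x_l = (l − 0.5)/m. The rows of the resulting collocation matrix are mutually orthogonal, which is what keeps the linear systems in HWCM cheap. The indexing convention below is the common Chen–Hsiao one; this is not the paper's two-dimensional code.

```python
# One-dimensional Haar functions evaluated at collocation points, as used in
# Haar wavelet collocation: f(x) ~ sum_i c_i h_i(x), with coefficients found
# by solving the collocation system built from the matrix H below.

def haar(i, x):
    """i-th Haar function on [0, 1): h_1 = 1; h_i for i = 2^j + k + 1."""
    if i == 1:
        return 1.0
    j = 0
    while 2 ** (j + 1) < i:
        j += 1
    k = i - 2 ** j - 1                      # translation index, 0 <= k < 2^j
    lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
    if lo <= x < mid:
        return 1.0
    if mid <= x < hi:
        return -1.0
    return 0.0

m = 8                                       # number of collocation points (power of 2)
xs = [(l - 0.5) / m for l in range(1, m + 1)]
H = [[haar(i, x) for x in xs] for i in range(1, m + 1)]

# Rows of H are mutually orthogonal.
dot = sum(H[1][l] * H[2][l] for l in range(m))
print(dot)   # 0.0
```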

  3. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    Science.gov (United States)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  4. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.

  5. Managing Variety in Configure-to-Order Products - An Operational Method

    DEFF Research Database (Denmark)

    Myrodia, Anna; Hvam, Lars

    2014-01-01

    The aim of this paper is to develop an operational method to analyze the profitability of Configure-To-Order (CTO) products. The operational method consists of four steps: analysis of the product assortment, profitability analysis of configured products, market and competitor analysis, and analysis of product assortment scenarios. The proposed operational method is first developed based on both the available literature and practitioners' experience, and subsequently tested on a company that produces CTO products. The results from this application are discussed and opportunities for further research identified.

  6. A family of high-order gas-kinetic schemes and its comparison with Riemann solver based high-order methods

    Science.gov (United States)

    Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun

    2018-03-01

    Most high order computational fluid dynamics (CFD) methods for compressible flows are based on a Riemann solver for the flux evaluation and the Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is its easy implementation and the stability enhancement obtained by introducing more middle stages. However, nth-order time accuracy needs no fewer than n stages for the RK method, which can be very time and memory consuming due to the reconstruction required at each stage of a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can achieve the same order of time accuracy using fewer middle stages by exploiting the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, 5th-order schemes with 2 and 3 stages are developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods is evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and presents accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with the Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. The current high order MSMD GKS is a
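
    The two-stage idea can be sketched on a scalar ODE y' = F(y): the flux time derivative dF/dt = F'(y)F(y) supplies the extra derivative information that a Riemann solver lacks. This is a minimal sketch of the time-stepping skeleton only, in the shape of the two-stage fourth-order construction the abstract refers to; it contains no gas-kinetic flux.

```python
import math

# Two-stage fourth-order multi-stage multi-derivative (MSMD) step for
# y' = F(y), using the flux time derivative dF/dt = F'(y) * F(y).

def msmd_step(y, dt, F, dF):
    """One 2-stage 4th-order step: half-stage uses F and dF/dt at t^n,
    the full step reuses dF/dt at the half-stage."""
    f0, d0 = F(y), dF(y)
    y_half = y + 0.5 * dt * f0 + dt ** 2 / 8.0 * d0
    d_half = dF(y_half)
    return y + dt * f0 + dt ** 2 / 6.0 * (d0 + 2.0 * d_half)

# Test problem y' = y: here F(y) = y and dF/dt = F'(y) F(y) = y.
F = lambda y: y
dF = lambda y: y

err = abs(msmd_step(1.0, 0.1, F, dF) - math.exp(0.1))
print(err)   # one-step error, O(dt^5)
```

For the linear problem the step reproduces the fourth-order Taylor polynomial of exp(dt) exactly, so halving dt shrinks the one-step error by about 2^5 = 32.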

  7. A high-order Petrov-Galerkin method for the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Pain, C.C.; Candy, A.S.; Piggott, M.D.; Buchan, A.; Eaton, M.D.; Goddard, A.J.H.; Oliveira, C.R.E. de

    2005-01-01

    We describe a new Petrov-Galerkin method using high-order terms to introduce dissipation in a residual-free formulation. The method is developed following both a Taylor series analysis and a variational principle, and the result has much in common with traditional Petrov-Galerkin, Self Adjoint Angular Flux (SAAF) and Even Parity forms of the Boltzmann transport equation. In addition, we consider the subtleties in constructing appropriate boundary conditions. In sub-grid scale (SGS) modelling of fluids the advantages of high-order dissipation are well known. Fourth-order terms, for example, are commonly used as a turbulence model with uniform dissipation. They have been shown to have superior properties to SGS models based upon second-order dissipation or viscosity. Even higher-order forms of dissipation (e.g. 16th-order) can offer further advantages, but are only easily realised by spectral methods because of the solution continuity requirements that these higher-order operators demand. Higher-order operators are more effective, bringing a higher degree of representation to the solution locally. Second-order operators, for example, tend to relax the solution to a linear variation locally, whereas a high-order operator will tend to relax the solution to a second-order polynomial locally. The form of the dissipation is also important. For example, the dissipation may only be applied (as it is in this work) in the streamline direction. While for many problems, for example Large Eddy Simulation (LES), simply adding a second- or fourth-order dissipation term is a perfectly satisfactory SGS model, it is well known that a consistent residual-free formulation is required for radiation transport problems. This motivated the consideration of a new Petrov-Galerkin method that is residual-free, but also benefits from the advantageous features that SGS modelling introduces. We close with a demonstration of the advantages of this new discretization method over standard Petrov-Galerkin methods.

  8. A new method to evaluate the sealing reliability of the flanged connections for Molten Salt Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Li, Qiming, E-mail: liqiming@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China); Tian, Jian; Zhou, Chong [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China); Wang, Naxiu, E-mail: wangnaxiu@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China)

    2015-06-15

    Highlights: • We propose a novel evaluation of the sealing reliability of the flanged connections for MSRs. • We focus on the passive decrease of the leak impetus in flanged connections. • The modified flanged connections acquire a self-adjusting sealing ability. • Effects of the redesigned flange configurations on molten salt leakage are discussed. - Abstract: The Thorium based Molten Salt Reactor (TMSR) project is a future Generation IV nuclear reactor system proposed by the Chinese Academy of Sciences with the strategic goal of meeting the growing energy needs of Chinese economic development and social progress. It is based on liquid salts serving as both fuel and primary coolant, which consequently brings great challenges to the sealing of the flanged connections. In this study, an improved prototype flange assembly is developed on the basis of the Freeze-Flange initially developed by Oak Ridge National Laboratory (ORNL). The calculation results of the finite element model established to analyze the temperature profile of the Freeze-Flange agree well with the experimental data, which indicates that the numerical simulation method is credible. Furthermore, the ideal-gas thermodynamic model, together with a mathematical approximation, is newly adapted to theoretically evaluate the sealing performance of the modified Freeze-Flange and the traditional double-gasket bolted flange joint. This study focuses on the passive decrease of the leak driving force due to the multiple gaskets introduced in flanged connections for MSRs. The effects of the redesigned flange configuration on molten salt leakage resistance are discussed in detail.

  9. A new method to evaluate the sealing reliability of the flanged connections for Molten Salt Reactors

    International Nuclear Information System (INIS)

    Li, Qiming; Tian, Jian; Zhou, Chong; Wang, Naxiu

    2015-01-01

    Highlights: • We propose a novel evaluation of the sealing reliability of the flanged connections for MSRs. • We focus on the passive decrease of the leak impetus in flanged connections. • The modified flanged connections acquire a self-adjusting sealing ability. • Effects of the redesigned flange configurations on molten salt leakage are discussed. - Abstract: The Thorium based Molten Salt Reactor (TMSR) project is a future Generation IV nuclear reactor system proposed by the Chinese Academy of Sciences with the strategic goal of meeting the growing energy needs of Chinese economic development and social progress. It is based on liquid salts serving as both fuel and primary coolant, which consequently brings great challenges to the sealing of the flanged connections. In this study, an improved prototype flange assembly is developed on the basis of the Freeze-Flange initially developed by Oak Ridge National Laboratory (ORNL). The calculation results of the finite element model established to analyze the temperature profile of the Freeze-Flange agree well with the experimental data, which indicates that the numerical simulation method is credible. Furthermore, the ideal-gas thermodynamic model, together with a mathematical approximation, is newly adapted to theoretically evaluate the sealing performance of the modified Freeze-Flange and the traditional double-gasket bolted flange joint. This study focuses on the passive decrease of the leak driving force due to the multiple gaskets introduced in flanged connections for MSRs. The effects of the redesigned flange configuration on molten salt leakage resistance are discussed in detail.

  10. RELIABILITY AND ACCURACY ASSESSMENT OF INVASIVE AND NON- INVASIVE SEISMIC METHODS FOR SITE CHARACTERIZATION: FEEDBACK FROM THE INTERPACIFIC PROJECT

    OpenAIRE

    Garofalo , F.; Foti , S.; Hollender , F.; Bard , P.-Y.; Cornou , C.; Cox , B.R.; Dechamp , A.; Ohrnberger , M.; Sicilia , D.; Vergniault , C.

    2017-01-01

    International audience; The InterPacific project (Intercomparison of methods for site parameter and velocity profile characterization) aims to assess the reliability of seismic site characterization methods (borehole and surface wave methods) used for estimating shear wave velocity (VS) profiles and other related parameters (e.g., VS30). Three sites, representative of different geological conditions relevant for the evaluation of seismic site response effects, have been selected: (1) a hard r...

  11. High order aberrations calculation of a hexapole corrector using a differential algebra method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Yongfeng, E-mail: yfkang@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Liu, Xing [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Zhao, Jingyi, E-mail: jingyi.zhao@foxmail.com [School of Science, Chang’an University, Xi’an 710064 (China); Tang, Tiantong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2017-02-21

    The differential algebra (DA) method has proved to be a powerful and effective tool in numerical analysis. Based on nonstandard analysis, it conveniently implements differentiation up to arbitrarily high order. In this paper, the DA method has been employed to compute the high order aberrations, up to the fifth order, of a practical hexapole corrector including round lenses and hexapole lenses, and the corresponding program has been developed and tested. The electro-magnetic fields at arbitrary points are obtained from local analytic expressions, and the field potentials are then transformed into new forms that can be operated on in the DA calculation. The geometric and chromatic aberrations up to fifth order of a practical hexapole corrector system are calculated by the developed program.
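
    The mechanism behind DA arithmetic can be sketched in one variable: numbers carry their Taylor coefficients up to a fixed order, and multiplication is a truncated Cauchy product, so derivatives of composed maps come out automatically. Real aberration codes work with many variables and much higher order; this sketch only shows the principle.

```python
import math

# One-variable truncated Taylor (differential algebra) arithmetic.

ORDER = 3  # keep coefficients c0..c3

def da_mul(a, b):
    """Truncated Cauchy product of two coefficient lists."""
    c = [0.0] * (ORDER + 1)
    for i in range(ORDER + 1):
        for j in range(ORDER + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def da_var(x0):
    """The identity map x -> x expanded at x0: coefficients (x0, 1, 0, ...)."""
    return [x0, 1.0] + [0.0] * (ORDER - 1)

# Evaluate f(x) = x^3 at x0 = 2 in DA arithmetic.
x = da_var(2.0)
f = da_mul(da_mul(x, x), x)

# k-th derivative = k! * k-th Taylor coefficient.
derivs = [math.factorial(k) * f[k] for k in range(ORDER + 1)]
print(derivs)   # [8.0, 12.0, 12.0, 6.0] = f(2), f'(2), f''(2), f'''(2)
```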

  12. Doppler Radar Vital Signs Detection Method Based on Higher Order Cyclostationary.

    Science.gov (United States)

    Yu, Zhibin; Zhao, Duo; Zhang, Zhiqiang

    2017-12-26

    Due to their non-contact nature, Doppler radar sensors for detecting vital signs such as the heart and respiration rates of a human subject are attracting more and more attention. However, the related detection-method research faces many challenges due to electromagnetic interference, clutter and random motion interference. In this paper, a novel third-order cyclic cumulant (TOCC) detection method, which is insensitive to Gaussian interference and non-cyclic signals, is proposed to investigate the heart and respiration rates based on continuous wave Doppler radars. The k-th order cyclostationary properties of the radar signal with hidden periodicities and random motions are analyzed. The third-order cyclostationary detection theory for the heart and respiration rates is studied. Experimental results show that the third-order cyclostationary approach has better estimation accuracy for detecting vital signs from the received radar signal under low SNR, strong clutter noise and random motion interference.
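
    The simplest special case of a third-order statistic (zero lags) already shows why it suppresses symmetric interference: the third-order moment of a zero-mean symmetric component vanishes, while a harmonically related pair, such as a skewed periodic source produces, leaves a nonzero trace. The signal below is synthetic and not from the paper.

```python
import math

# Third-order moment E[x^3] (the zero-lag slice of the third-order cumulant
# for zero-mean signals): zero for a pure cosine, nonzero when a harmonic
# at twice the frequency is present.

N = 1024                       # samples, an integer number of periods
f = 8                          # cycles over the record

pure = [math.cos(2 * math.pi * f * n / N) for n in range(N)]
skewed = [math.cos(2 * math.pi * f * n / N)
          + 0.5 * math.cos(4 * math.pi * f * n / N) for n in range(N)]

def third_moment(x):
    return sum(v ** 3 for v in x) / len(x)

print(third_moment(pure))      # ~0: the symmetric component is suppressed
print(third_moment(skewed))    # ~0.375: the harmonic structure survives
```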

  13. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    The Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method can compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
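
    The sample-based spirit of such estimators can be sketched with a simple binning rule: partition the samples of X_i, use within-bin averages as a proxy for E[Y|X_i], and reuse the same input-output samples for every input. The binning rule and the linear test model below are our own illustrative choices, not the paper's algorithm.

```python
import random

# Sketch: first-order Sobol' index S1_i = Var(E[Y|X_i]) / Var(Y) estimated
# directly from input-output samples by binning the sorted samples of X_i.

def first_order_index(xs, ys, bins=20):
    pairs = sorted(zip(xs, ys))            # sort samples by the input of interest
    n = len(pairs)
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    size = n // bins
    var_cond = 0.0
    for b in range(bins):
        chunk = [y for _, y in pairs[b * size:(b + 1) * size]]
        m = sum(chunk) / len(chunk)        # within-bin mean ~ E[Y | X_i in bin]
        var_cond += len(chunk) * (m - mean_y) ** 2
    return var_cond / n / var_y

random.seed(0)                             # deterministic for reproducibility
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [a + 2 * b for a, b in zip(x1, x2)]    # analytic S1 = 0.2, S2 = 0.8

print(first_order_index(x1, y))            # ~0.2
print(first_order_index(x2, y))            # ~0.8
```

Note that both indices are computed from the same set of samples, which is the modularization the abstract highlights.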

  14. Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-analytic Method.

    Science.gov (United States)

    Helms, LuAnn Sherbeck

    This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…

  15. Entropy Viscosity Method for High-Order Approximations of Conservation Laws

    KAUST Repository

    Guermond, J. L.

    2010-09-17

    A stabilization technique for conservation laws is presented. It introduces into the governing equations a nonlinear dissipation, a function of the residual of the associated entropy equation that is bounded from above by a first-order viscous term. Different two-dimensional test cases are simulated - a 2D Burgers problem, the "KPP rotating wave" and the Euler system - using high order methods: spectral elements or Fourier expansions. Details on the tuning of the parameters controlling the entropy viscosity are given. © 2011 Springer.
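
    A minimal sketch of the construction for 1D Burgers on a periodic finite-difference grid (the paper works with spectral elements or Fourier expansions): the entropy is E = u²/2 with entropy flux Q = u³/3, and the viscosity is the scaled entropy residual capped by a first-order term. The tuning constants c_max and c_e below are arbitrary illustrative values.

```python
# Entropy-viscosity coefficient for 1D Burgers on a periodic grid: the
# residual of the entropy equation dE/dt + dQ/dx drives a second-order
# viscosity, capped by the first-order term c_max * h * max|u|.

def entropy_viscosity(u_old, u_new, dt, h, c_max=0.5, c_e=2.0):
    n = len(u_new)
    umax = max(abs(v) for v in u_new)
    nu_cap = c_max * h * umax                      # first-order cap
    E_old = [v * v / 2 for v in u_old]
    E_new = [v * v / 2 for v in u_new]
    Q = [v ** 3 / 3 for v in u_new]
    mean_E = sum(E_new) / n
    norm = max(abs(e - mean_E) for e in E_new)     # normalization of the residual
    if norm == 0.0:
        norm = 1.0
    nu = []
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n          # periodic neighbours
        R = (E_new[i] - E_old[i]) / dt + (Q[ip] - Q[im]) / (2 * h)
        nu.append(min(nu_cap, c_e * h * h * abs(R) / norm))
    return nu

h, dt = 0.1, 0.01
steady = [1.0] * 10
print(max(entropy_viscosity(steady, steady, dt, h)))   # 0.0: zero residual

jump = [1.0] * 5 + [-1.0] * 5
print(max(entropy_viscosity(jump, jump, dt, h)))       # hits the cap near the jump
```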

  16. Entropy Viscosity Method for High-Order Approximations of Conservation Laws

    KAUST Repository

    Guermond, J. L.; Pasquetti, R.

    2010-01-01

    A stabilization technique for conservation laws is presented. It introduces into the governing equations a nonlinear dissipation, a function of the residual of the associated entropy equation that is bounded from above by a first-order viscous term. Different two-dimensional test cases are simulated - a 2D Burgers problem, the "KPP rotating wave" and the Euler system - using high order methods: spectral elements or Fourier expansions. Details on the tuning of the parameters controlling the entropy viscosity are given. © 2011 Springer.

  17. Quantitative developments in the cognitive reliability and error analysis method (CREAM) for the assessment of human performance

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Librizzi, Massimo

    2006-01-01

    The current 'second generation' approaches in human reliability analysis focus their attention on the contextual conditions under which a given action is performed rather than on the notion of inherent human error probabilities, as was done in the earlier 'first generation' techniques. Among the 'second generation' methods, this paper considers the Cognitive Reliability and Error Analysis Method (CREAM) and proposes some developments with respect to a systematic procedure for computing probabilities of action failure. The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm which is here further extended to include uncertainty on the qualification of the conditions under which the action is performed and to account for the fact that the effects of the common performance conditions (CPCs) on performance reliability may not all be equal. By the proposed approach, the probability of action failure is estimated by rating the performance conditions in terms of their effect on the action

  18. A high-order boundary integral method for surface diffusions on elastically stressed axisymmetric rods.

    Science.gov (United States)

    Li, Xiaofan; Nie, Qing

    2009-07-01

    Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusions. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on explicit representation of the mean curvature, is used to reduce the stability constraint on time-step. To apply this method to a periodic (in axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in absence of elasticity the cylinder surface pinches in finite time at the axis of the symmetry and the universal cone angle of the pinching is found to be consistent with the previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite time, geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.

  19. Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method

    International Nuclear Information System (INIS)

    Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A

    2009-01-01

    A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the results obtained using this method with the exact ones reveals that this modified method is very effective and convenient.
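
    The exact period against which such approximations are benchmarked is T = (2/π)·T0·K(k) with k = sin(A/2), where T0 is the small-angle period and K the complete elliptic integral of the first kind, computable via the arithmetic-geometric mean: K(k) = π/(2·AGM(1, √(1−k²))). The sketch below reproduces only this benchmark, not the homotopy expansion itself.

```python
import math

# Exact pendulum period via the arithmetic-geometric mean.

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (quadratic convergence)."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period_ratio(A_deg):
    """Exact period divided by the small-angle period T0 = 2*pi/omega0."""
    k = math.sin(math.radians(A_deg) / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 2 * K / math.pi

print(period_ratio(30))    # ~1.0174: a 30 deg swing is about 1.7% slower
print(period_ratio(152))   # the large amplitude up to which the paper reports <1% error
```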

  20. A second order discontinuous Galerkin fast sweeping method for Eikonal equations

    Science.gov (United States)

    Li, Fengyan; Shu, Chi-Wang; Zhang, Yong-Tao; Zhao, Hongkai

    2008-09-01

    In this paper, we construct a second order fast sweeping method with a discontinuous Galerkin (DG) local solver for computing viscosity solutions of a class of static Hamilton-Jacobi equations, namely the Eikonal equations. Our piecewise linear DG local solver is built on a DG method developed recently [Y. Cheng, C.-W. Shu, A discontinuous Galerkin finite element method for directly solving the Hamilton-Jacobi equations, Journal of Computational Physics 223 (2007) 398-415] for the time-dependent Hamilton-Jacobi equations. The causality property of Eikonal equations is incorporated into the design of this solver. The resulting local nonlinear system in the Gauss-Seidel iterations is a simple quadratic system and can be solved explicitly. The compactness of the DG method and the fast sweeping strategy lead to fast convergence of the new scheme for Eikonal equations. Extensive numerical examples verify efficiency, convergence and second order accuracy of the proposed method.
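
    As background for the sweeping strategy, a minimal first-order fast sweeping solver for |∇u| = 1 with a point source is sketched below; the paper replaces the first-order Godunov local solver used here with a piecewise linear DG local solver. Grid size and sweep counts are arbitrary choices.

```python
import math

# Classic first-order fast sweeping method for the Eikonal equation
# |grad u| = 1 with a point source: Gauss-Seidel updates with the Godunov
# upwind local solver, repeated over the four alternating sweep orderings.

n, h = 21, 0.1
INF = 1e9
u = [[INF] * n for _ in range(n)]
src = n // 2
u[src][src] = 0.0                          # point-source boundary condition

def update(a, b, h):
    """Godunov upwind local solver: a, b are the upwind neighbour values."""
    if abs(a - b) >= h:
        return min(a, b) + h
    return (a + b + math.sqrt(2 * h * h - (a - b) ** 2)) / 2

for _ in range(3):                         # repeat the four alternating sweeps
    for di, dj in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
        irange = range(n) if di > 0 else range(n - 1, -1, -1)
        jrange = range(n) if dj > 0 else range(n - 1, -1, -1)
        for i in irange:
            for j in jrange:
                if i == src and j == src:
                    continue
                a = min(u[i - 1][j] if i > 0 else INF,
                        u[i + 1][j] if i < n - 1 else INF)
                b = min(u[i][j - 1] if j > 0 else INF,
                        u[i][j + 1] if j < n - 1 else INF)
                u[i][j] = min(u[i][j], update(a, b, h))

print(u[src][src + 5])       # ~0.5: exact along grid axes (up to rounding)
print(u[src + 5][src + 5])   # diagonal: between Euclidean and Manhattan distance
```

The first-order scheme overestimates distances along diagonals, which is the accuracy gap the second-order DG local solver is designed to close.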