Sample records for reliable core analysis

  1. Reliable Quantitative SERS Analysis Facilitated by Core-Shell Nanoparticles with Embedded Internal Standards.

    Shen, Wei; Lin, Xuan; Jiang, Chaoyang; Li, Chaoyu; Lin, Haixin; Huang, Jingtao; Wang, Shuo; Liu, Guokun; Yan, Xiaomei; Zhong, Qiling; Ren, Bin


    Quantitative analysis remains a great challenge in surface-enhanced Raman scattering (SERS). Core-molecule-shell nanoparticles with two components in the molecular layer, a framework molecule that forms the shell and a probe molecule that serves as a Raman internal standard, were rationally designed for quantitative SERS analysis. The signal of the embedded Raman probe provides effective feedback to correct for fluctuations in samples and measuring conditions. Meanwhile, target molecules with different affinities can be adsorbed onto the shell. Quantitative analysis of target molecules over a large concentration range has been demonstrated, with a linear response of relative SERS intensity versus surface coverage that has not been achieved by conventional SERS methods.

  2. Citation analysis did not provide a reliable assessment of core outcome set uptake.

    Barnes, Karen L; Kirkham, Jamie J; Clarke, Mike; Williamson, Paula R


    The aim of the study was to evaluate citation analysis as an approach to measuring core outcome set (COS) uptake, by assessing whether the number of citations for a COS report could be used as a surrogate measure of uptake of the COS by clinical trialists. Citation data were obtained for COS reports published before 2010 in five disease areas (systemic sclerosis, rheumatoid arthritis, eczema, sepsis and critical care, and female sexual dysfunction). Those publications identified as a report of a clinical trial were examined to identify whether or not all outcomes in the COS were measured in the trial. Clinical trials measuring the relevant COS made up a small proportion of the total number of citations for COS reports. Not all trials citing a COS report measured all the recommended outcomes. Some trials cited the COS reports for other reasons, including the definition of a condition or other trial design issues addressed by the COS report. Although citation data can be readily accessed, it should not be assumed that the citing of a COS report indicates that a trial has measured the recommended COS. Alternative methods for assessing COS uptake are needed. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Challenges Regarding IP Core Functional Reliability

    Berg, Melanie D.; LaBel, Kenneth A.


    For many years, intellectual property (IP) cores have been incorporated into field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) design flows. However, the usage of large, complex IP cores was once limited in products that required a high level of reliability. This is no longer the case: IP core insertion has become mainstream, including in highly reliable products. Because designers have limited visibility into and control over IP cores, challenges arise that can compromise product reliability. We discuss these challenges and suggest potential solutions for IP insertion in critical applications.

  4. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
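The fault-tree approach described above can be sketched in a few lines: component failure probabilities combine through OR gates for series elements and AND gates for redundant ones. The converter structure and failure probabilities below are invented for illustration, not taken from the report.

```python
# Sketch: deriving system reliability from component reliabilities via a
# simple fault tree (series/parallel gates). All values are illustrative.

def and_gate(*failure_probs):
    # Top event occurs only if ALL inputs fail (redundant components).
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(*failure_probs):
    # Top event occurs if ANY input fails (series components).
    p = 1.0
    for q in failure_probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical converter: two redundant switching legs in parallel,
# in series with a DC-link capacitor and a controller.
q_leg, q_cap, q_ctrl = 0.02, 0.01, 0.005
q_system = or_gate(and_gate(q_leg, q_leg), q_cap, q_ctrl)
print(f"system reliability: {1.0 - q_system:.4f}")
```

The same gate functions compose to arbitrary tree depth, which is what lets a fault tree derive system reliability from component reliability as the report describes.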

  5. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable discipline-specific programming effort. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multidisciplinary systems. Both traditional and progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.

  6. System Reliability Analysis: Foundations.


    Performance formulas for systems subject to preventive maintenance are given. A worked example computes the probability h(p) that a source s can communicate with a terminal t in a network whose links each survive with probability p (the original expression is too garbled in this record to reproduce). For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash

  7. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
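The Weibull-style, shot-dependent failure model described above can be sketched as follows; the scale/shape parameters and component counts are illustrative assumptions, not the ATLAS values.

```python
# Sketch: failure probability growing with the number of shots under a
# Weibull distribution, combined across independent series components.
import math

def weibull_failure_prob(n_shots, scale, shape):
    # F(n) = 1 - exp(-(n/scale)^shape); grows with the shot count n.
    return 1.0 - math.exp(-((n_shots / scale) ** shape))

def system_reliability(n_shots, components):
    # Independent components in series: reliabilities multiply.
    r = 1.0
    for scale, shape, count in components:
        r *= (1.0 - weibull_failure_prob(n_shots, scale, shape)) ** count
    return r

# Hypothetical bank: 100 capacitors and 50 railgaps (invented parameters).
bank = [(5000.0, 2.0, 100), (2000.0, 1.5, 50)]
for shots in (10, 100, 500):
    print(shots, round(system_reliability(shots, bank), 4))
```

Inverting the system-reliability curve for the shot count at which it drops below a target (e.g. 95%) gives a maintenance interval, the kind of quantity the analysis extracts from life tests.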

  8. Further HTGR core support structure reliability studies. Interim report No. 1

    Platus, D.L.


    Results of a continuing effort to investigate high temperature gas cooled reactor (HTGR) core support structure reliability are described. Graphite material and core support structure component physical, mechanical, and strength properties required for the reliability analysis are identified. Also described are experimental and associated analytical techniques for determining the required properties, a procedure for determining the number of tests required, properties that might be monitored by special surveillance of the core support structure to improve reliability predictions, and recommendations for further studies. Emphasis in the study is directed towards developing a basic understanding of graphite failure and strength degradation mechanisms, and towards validating analytical methods for predicting strength and strength degradation from basic material properties.

  9. Reliability Analysis of High Rockfill Dam Stability

    Ping Yi


    A program, 3DSTAB, combining slope stability analysis and reliability analysis is developed and validated. In this program, the limit equilibrium method is utilized to calculate safety factors of critical slip surfaces. The first-order reliability method is used to compute reliability indexes corresponding to critical probabilistic surfaces. When derivatives of the performance function are calculated by the finite difference method, the previous iteration's critical slip surface is saved and reused. This sequential approximation strategy notably improves efficiency. Using this program, stability reliability analyses of concrete-faced rockfill dams and earth-core rockfill dams with different heights and slope ratios are performed. The results show that both safety factors and reliability indexes decrease as the dam's slope increases at a constant height and as the dam's height increases at a constant slope. They decrease dramatically as the dam height increases from 100 m to 200 m but decrease slowly once the dam height exceeds 250 m, which deserves attention. Additionally, both the safety factors and reliability indexes of the upstream slope of earth-core rockfill dams are higher than those of the downstream slope. Thus, downstream slope stability is the key failure mode for earth-core rockfill dams.
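For a linear limit-state function with independent normal variables, the first-order reliability method mentioned above reduces to the ratio of the mean safety margin to its standard deviation. A sketch under that simplifying assumption (the slip-surface performance function in the paper is more complex):

```python
# Sketch: first-order reliability index for g = R - S with independent
# normal resistance R and load S; the numbers are illustrative.
import math

def reliability_index(mu_r, sig_r, mu_s, sig_s):
    # beta = E[g] / std[g] for the linear limit state g = R - S.
    return (mu_r - mu_s) / math.hypot(sig_r, sig_s)

def failure_prob(beta):
    # Pf = Phi(-beta), via the standard normal CDF.
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

beta = reliability_index(mu_r=2.0, sig_r=0.3, mu_s=1.2, sig_s=0.2)
print(round(beta, 3), failure_prob(beta))
```

Here beta comes out near 2.2, corresponding to a failure probability of roughly 1.3%; for nonlinear performance functions FORM iterates to the most probable failure point instead of using this closed form.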

  10. Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)


    This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.

  11. Reliability Analysis of Wind Turbines

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    In order to minimise the total expected life-cycle costs of a wind turbine, it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS), the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example, the design is determined according...

  12. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  13. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A hybrid reliability model is presented to solve fatigue reliability problems of steel bridges. The cumulative damage model is one of the models used in fatigue reliability analysis. The parameter characteristics of the model can be described as probabilistic and interval. A two-stage hybrid reliability model is given, with a theoretical foundation and a solving algorithm for hybrid reliability problems. The theoretical foundation is established by the consistency relationships between the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with a probabilistic algorithm. With consideration of the parameter characteristics of the S-N curve, a cumulative damage model with hybrid variables is given based on the standards from different countries. Lastly, a case of the steel structure of the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO standard.
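A cumulative damage model of the kind the paper builds on can be sketched with an S-N curve and Miner's rule; the curve constants and load spectrum below are invented for illustration, not taken from any code or standard.

```python
# Sketch: Miner's-rule cumulative damage with an S-N curve N = A * S**(-m).
# A and m are illustrative placeholders, not code-specified values.

def cycles_to_failure(stress_range, A=1.0e12, m=3.0):
    # S-N curve: allowable cycles at constant stress range S (MPa).
    return A * stress_range ** (-m)

def miner_damage(spectrum):
    # spectrum: list of (stress_range_MPa, applied_cycles).
    # Damage fractions n/N sum linearly; failure is predicted at D >= 1.
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical traffic-load spectrum for a bridge detail.
traffic = [(80.0, 2.0e5), (60.0, 1.0e6), (40.0, 5.0e6)]
print(f"damage index D = {miner_damage(traffic):.3f}")
```

In the hybrid setting the paper describes, A and m would carry probabilistic or interval uncertainty instead of being fixed constants, and D would be propagated accordingly.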

  14. Sensitivity Analysis of Component Reliability



    In a system, every component has its unique position and its unique failure characteristics. When a component's reliability changes, its effect on system reliability is not equal to that of other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and the relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. These results help in analysing and improving system reliability.
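The sensitivity defined above can be sketched as the derivative of system reliability with respect to one component's reliability (the Birnbaum importance measure); the series-parallel system below is a hypothetical example, not one from the paper.

```python
# Sketch: Birnbaum importance, dR_sys/dR_i, via central finite differences
# for an invented series-parallel system.

def series(*rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

def system(r1, r2, r3):
    # Component 1 in series with a redundant pair (2, 3).
    return series(r1, parallel(r2, r3))

def birnbaum(i, rs, eps=1e-6):
    # Finite-difference sensitivity of system reliability to component i.
    hi, lo = list(rs), list(rs)
    hi[i] += eps
    lo[i] -= eps
    return (system(*hi) - system(*lo)) / (2 * eps)

rs = [0.95, 0.90, 0.80]
for i in range(3):
    print(f"component {i + 1}: sensitivity {birnbaum(i, rs):.3f}")
```

Components whose sensitivity is largest (here the series component 1) are where a reliability improvement pays off most, which is the use the abstract has in mind.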

  15. Reliability of core test – Critical assessment and proposed new approach

    Shafik Khoury


    Core testing is commonly required in the concrete industry to evaluate concrete strength, and sometimes it becomes the only tool for safety assessment of existing concrete structures. Core testing is therefore covered in most codes. An extensive literature survey of different international code provisions for core analysis, including the Egyptian, British, European, and ACI codes, is presented. All of the studied code provisions seem to be unreliable for predicting in-situ concrete cube strength from the results of core tests. A comprehensive experimental study was undertaken to examine the factors affecting the interpretation of core test results. The program involves four concrete mixes, three concrete grades (18, 30, and 48 MPa), five core diameters (1.5, 2, 3, 4, and 6 in.), five core aspect ratios (between 1 and 2), two types of coarse aggregate (pink limestone and gravel), two coring directions, three moisture conditions, and 18 different steel arrangements. Prototypes of concrete slabs and columns were constructed. More than 500 cores were prepared and tested, in addition to a large number of concrete cubes and cylinders. Results indicate that core strength decreases with an increase in aspect ratio, a reduction in core diameter, the presence of reinforcing steel, the incorporation of gravel in the concrete, an increase in core moisture content, drilling perpendicular to the casting direction, and a reduction in concrete strength. The Egyptian code provision for core interpretation is critically examined. Based on the experimental evidence from this study, a statistical analysis was performed to determine reliable strength correction factors that account for the studied variables. A simple weighted regression analysis of a model without an intercept was carried out using the SAS software package as well as DataFit software. A new model for interpretation of core test results is proposed considering

  16. Reliability Analysis of Sensor Networks

    JIN Yan; YANG Xiao-zong; WANG Ling


    To integrate the capabilities of sensing, communication, computing, and actuation, one of the compelling technological advances of recent years has been the emergence of the distributed wireless sensor network (DSN) for information-gathering tasks. In order to save energy, multi-hop routing between the sensor nodes and the sink node is necessary because of limited resources. In addition, unpredictable environmental factors make the sensor nodes unreliable. In this paper, the reliability of routing designed for sensor networks and some dependability attributes of DSNs, such as the MTTF (mean time to failure) and the probability of connectivity between the sensor nodes and the sink node, are analyzed. Unfortunately, an exact result cannot be obtained for an arbitrary network topology, which is a #P-hard problem, so a reliability analysis of restricted, clustering-based topologies is given. The method proposed in this paper offers a constructive idea of how to place energy-constrained sensor nodes in the network efficiently from the perspective of reliability.
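Since exact two-terminal reliability is #P-hard for arbitrary topologies, as the abstract notes, a Monte Carlo estimate is a common workaround. A sketch with an invented four-node topology and link reliabilities:

```python
# Sketch: Monte Carlo estimate of sensor-to-sink connectivity probability.
# Topology and per-link reliabilities are invented for illustration.
import random

def connected(n_nodes, up_links, s, t):
    # Depth-first search over the links that survived this trial.
    adj = {i: [] for i in range(n_nodes)}
    for u, v in up_links:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def two_terminal_reliability(n_nodes, links, s, t, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [(u, v) for u, v, p in links if rng.random() < p]
        hits += connected(n_nodes, up, s, t)
    return hits / trials

# Sensor 0 reaches sink 3 via two partly redundant two-hop paths.
links = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.8)]
print(two_terminal_reliability(4, links, 0, 3))
```

For this small example the exact answer is 1 - (1 - 0.81)(1 - 0.64) = 0.9316, so the estimate can be checked; for large arbitrary topologies only the sampled estimate remains tractable.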

  17. Reliability and Practicality of the Core Score: Four Dynamic Core Stability Tests Performed in a Physician Office Setting.

    Friedrich, Jason; Brakke, Rachel; Akuthota, Venu; Sullivan, William


    Pilot study to determine the practicality and inter-rater reliability of the "Core Score," a composite measure of 4 clinical core stability tests. Repeated measures. Academic hospital physician clinic. 23 healthy volunteers with mean age of 32 years (12 females, 11 males). All subjects performed 4 core stability maneuvers under direct observation from 3 independent physicians in sequence. Inter-rater reliability and time necessary to perform examination. The Core Score scale is 0 to 12, with 12 reflecting the best core stability. The mean composite score of all 4 tests for all subjects was 9.54 (SD, 1.897; range, 4-12). The intraclass correlation coefficients (ICC 1,1) for inter-rater reliability for the composite Core Score and 4 individual tests were 0.68 (Core Score), 0.14 (single-leg squat), 0.40 (supine bridge), 0.69 (side bridge), and 0.46 (prone bridge). The time required for a single examiner to assess a given subject's core stability in all 4 maneuvers averaged 4 minutes (range, 2-6 minutes). Even without specialized equipment, a clinically practical and moderately reliable measure of core stability may be possible. Further research is necessary to optimize this measure for clinical application. Despite the known value of core stability to athletes and patients with low back pain, there is currently no reliable and practical means for rating core stability in a typical office-based practice. This pilot study provides a starting point for future reliability research on clinical core stability assessments.
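The ICC(1,1) statistic reported above can be computed from a subjects-by-raters table using one-way ANOVA mean squares; the ratings below are invented, not the study's data.

```python
# Sketch: ICC(1,1), one-way random effects, single rater.
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)

def icc_1_1(scores):
    # scores: one row per subject, one column per rater.
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-subjects and within-subjects mean squares.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical Core Scores from 5 subjects rated by 3 physicians.
ratings = [[9, 10, 9], [4, 6, 5], [8, 8, 7], [11, 12, 12], [6, 5, 6]]
print(round(icc_1_1(ratings), 3))
```

Values near 1 indicate raters agree up to noise; the study's composite value of 0.68 sits in the "moderate" band this statistic is usually read against.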

  18. Creep-rupture reliability analysis

    Peralta-Duran, A.; Wirsching, P. H.


    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time-temperature parameters (TTPs) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and the Manson-Haferd parameters, but it can be applied to any of the TTPs. A method is developed for evaluating material-dependent constants for TTPs. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long-term behavior. Uncertainty in predicting long-term behavior from short-term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state-of-the-art reliability methods to the design of components under creep.
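The Larson-Miller parameter the analysis focuses on is LMP = T(C + log10 t); a sketch with the conventional C = 20 and invented test values shows how short-term data extrapolate to longer lives at lower temperature.

```python
# Sketch: Larson-Miller time-temperature parameter and its inversion.
# C = 20 is the textbook default; the test values are illustrative.
import math

def larson_miller(temp_kelvin, hours, C=20.0):
    # LMP = T * (C + log10 t), with T in kelvin and t in hours.
    return temp_kelvin * (C + math.log10(hours))

def extrapolated_life(temp_kelvin, lmp, C=20.0):
    # Invert the LMP for rupture time at a new temperature,
    # assuming the same stress level (same point on the master curve).
    return 10.0 ** (lmp / temp_kelvin - C)

# Short-term test: rupture in 100 h at 900 K.
lmp = larson_miller(900.0, 100.0)  # 900 * (20 + 2) = 19800
# Predicted rupture life at 800 K for the same stress:
print(extrapolated_life(800.0, lmp))
```

The paper's contribution is to treat the constant C and the master curve statistically rather than as the fixed values assumed here, so that the extrapolated life carries an uncertainty.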

  19. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory, and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model incorporates the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.
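The Bayesian updating of reliability with new information that the thesis advocates can be sketched with a conjugate Beta prior and binomial test data; the prior and the counts below are illustrative, not from the thesis.

```python
# Sketch: Beta-Binomial updating of a component's demand reliability.
# Prior Beta(alpha, beta) + observed successes/failures -> posterior.

def update_beta(alpha, beta, successes, failures):
    # Conjugate update: posterior is Beta(alpha + s, beta + f).
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

# Weakly informative Beta(1, 1) prior, then 48 successes in 50 demands.
a, b = update_beta(1.0, 1.0, successes=48, failures=2)
print(round(beta_mean(a, b), 4))  # posterior mean 49/52, about 0.9423
```

Each new batch of test data simply feeds through the same update, which is the "reliability changes as the state of knowledge changes" idea in executable form.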

  20. Core-seis: a code for LMFBR core seismic analysis

    Chellapandi, P.; Ravi, R.; Chetal, S.C.; Bhoje, S.B. [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Reactor Group


    This paper deals with a computer code, CORE-SEIS, specially developed for seismic analysis of LMFBR core configurations. To demonstrate the prediction capability of the code, results are presented for one of the MONJU reactor core mock-ups, a cluster of 37 subassemblies kept in water. (author). 3 refs., 7 figs., 2 tabs.

  1. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…
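Cronbach's alpha, the procedure used above, can be computed directly from an item-score matrix; the respondent scores below are invented, not the survey's data.

```python
# Sketch: Cronbach's alpha for internal-consistency reliability.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))

def cronbach_alpha(items):
    # items: one row per respondent, one column per item.
    n, k = len(items), len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

# Hypothetical 3-item domain answered by 5 respondents on a 1-5 scale.
scores = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]]
print(round(cronbach_alpha(scores), 3))
```

Alpha rises when items covary strongly relative to their individual spread; values above roughly 0.7 are conventionally read as acceptable internal consistency for a domain.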

  3. The validity and reliability of a dynamic neuromuscular stabilization-heel sliding test for core stability.

    Cha, Young Joo; Lee, Jae Jin; Kim, Do Hyun; You, Joshua Sung H


    Core stabilization plays an important role in the regulation of postural stability. To overcome shortcomings associated with pain and severe core instability during conventional core stabilization tests, we recently developed the dynamic neuromuscular stabilization-based heel sliding (DNS-HS) test. The purpose of this study was to establish the criterion validity and test-retest reliability of the novel DNS-HS test. Twenty young adults with core instability completed both the bilateral straight leg lowering test (BSLLT) and the DNS-HS test for the criterion validity study, and repeated the DNS-HS test for the test-retest reliability study. Criterion validity was determined by comparing hip joint angle data obtained from the BSLLT and DNS-HS measures; test-retest reliability was determined by comparing repeated hip joint angle data. Criterion validity between the BSLLT and DNS-HS core stability measures was ICC(2,3) = 0.700, and test-retest reliability between repeated DNS-HS core stability measures was ICC(3,3) = 0.953. The test-retest data suggest that the DNS-HS is a reliable test of core stability. Clinically, the DNS-HS test is useful for objectively quantifying core instability and allows early detection and evaluation.

  4. Combination of structural reliability and interval analysis

    Zhiping Qiu; Di Yang; saac Elishakoff


    In engineering applications, probabilistic reliability theory is presently the most important method; however, in many cases a precise probabilistic model cannot be considered an adequate and credible description of the real state of affairs. In this paper, we develop a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are insufficient. Using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory, but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
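The hybrid idea, treating uncertain parameters as intervals when data are insufficient, can be sketched by propagating interval means through a reliability index; the bounds and standard deviations below are illustrative assumptions, not the paper's formulation.

```python
# Sketch: interval bounds on the reliability index beta for g = R - S when
# the means are only known as intervals; all numbers are illustrative.
import math

def beta_interval(mu_r_lo, mu_r_hi, mu_s_lo, mu_s_hi, sig_r, sig_s):
    sig_g = math.hypot(sig_r, sig_s)
    lo = (mu_r_lo - mu_s_hi) / sig_g  # worst-case combination
    hi = (mu_r_hi - mu_s_lo) / sig_g  # best-case combination
    return lo, hi

lo, hi = beta_interval(1.8, 2.2, 1.1, 1.3, 0.3, 0.2)
print(f"beta in [{lo:.2f}, {hi:.2f}]")
```

Reporting the whole interval [beta_lo, beta_hi], rather than a single beta, is what lets the hybrid method stay honest when only bounds, not distributions, are available for some parameters.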

  5. Substitution of supplementary subtests for core subtests on composite reliability of WAIS-IV Indexes.

    Ryan, Joseph J; Glass, Laura A


    The effects of replacing core subtests with supplementary subtests on composite-score reliabilities were evaluated for the WAIS-IV Indexes. Composite-score reliabilities and SEMs (i.e., confidence intervals around obtained scores) are provided for the 13 unique Index scores calculated following the subtest-substitution guidelines of Wechsler (2008). In all instances, unique Index composite-score reliabilities were comparable to their respective core Index composite-score reliabilities, and measurement error never increased by more than 1 point. Using the standard Verbal Comprehension Index and Perceptual Reasoning Index and the unique subtest combinations for the Working Memory and Processing Speed Indexes, which have the lowest composite-score reliabilities, decreased Full Scale composite reliability by .01, while the associated confidence interval of +/-6 represents an increase in measurement error of 1 IQ point.

  6. Developing a Reliable Core Stability Assessment Battery for Patients with Nonspecific Low Back Pain.

    Ozcan Kahraman, Buse; Salik Sengul, Yesim; Kahraman, Turhan; Kalemci, Orhan


    Test-retest design. The objective was to examine the intrarater (test-retest) reliability of core stability-related tests and to develop a reliable core stability assessment battery. Studies suggest that core stability exercises may improve function and decrease pain in patients with nonspecific low back pain (LBP). Reliable clinical tests are required to implement adequate rehabilitation and to evaluate the results of these interventions. The study had a test-retest design. Thirty-three different tests that might relate to core stability were identified, with their most commonly used protocols. Five different components of core stability, including endurance, flexibility, strength, functional performance, and motor control, were assessed in 38 patients with nonspecific LBP. The same testing procedure was performed again after 48 to 72 hours. Intraclass correlation coefficients (ICCs), standard error of measurement, and minimal detectable change were calculated to assess intrarater reliability. The intrarater reliability of the tests ranged from little to very high (ICC = 0.08-0.98). The partial curl-up (ICC = 0.90), lateral bridge (ICC = 0.95-0.96), trunk flexor endurance (ICC = 0.97), sit and reach (ICC = 0.98), single-legged hop (ICC = 0.97-0.98), lateral step-down (ICC = 0.92-0.93), and eyes-open right- and left-leg unilateral stance (ICC = 0.97 and 0.91) tests had the highest intrarater reliability for each core stability component. The results indicated that the partial curl-up test (strength), side bridge and trunk flexor tests (endurance), sit-and-reach test (flexibility), single-legged hop and lateral step-down (functional), and unilateral stance test with eyes open (motor control) had very high intrarater reliability. A core stability assessment battery involving these tests can be used in patients with nonspecific LBP to assess all components of core stability. Level of evidence: 3.

  7. Reliability, construct validity and measurement potential of the ICF comprehensive core set for osteoarthritis

    Kurtaiş Yeşim


    Background: This study aimed to investigate the reliability and construct validity of the International Classification of Functioning, Disability and Health (ICF) Comprehensive Core Set for osteoarthritis (OA) in order to test its possible use as a measuring tool for functioning. Methods: 100 patients with OA (84 F, 16 M; mean age 63 yr) completed forms including demographic and clinical information besides the Short Form 36 Health Survey (SF-36®) and the Western Ontario and McMaster Universities Index of Osteoarthritis (WOMAC). The ICF Comprehensive Core Set for OA was filled in by health professionals. The internal construct validities of the "Body Functions-Body Structures" (BF-BS), "Activity" (A), "Participation" (P) and "Environmental Factors" (EF) domains were tested by Rasch analysis, and reliability by internal consistency and the person separation index (PSI). External construct validity was evaluated by correlating the Rasch-transformed scores with the SF-36 and WOMAC. Results: In each scale, some items showing disordered thresholds were rescored, testlets were created to overcome the problem of local dependency, and items that did not fit the Rasch model were deleted. The internal construct validity of the four scales (BF-BS 16 items, A 8 items, P 7 items, EF 13 items) was good [mean item fit (SD) 0.138 (0.921), 0.216 (1.237), 0.759 (0.986) and -0.079 (2.200); person item fit (SD) -0.147 (0.652), -0.241 (0.894), -0.310 (1.187) and -0.491 (1.173), respectively], indicating a single underlying construct for each scale. The scales were free of differential item functioning (DIF) for age, gender, years of education and duration of disease. Reliabilities of the BF-BS, A, P, and EF scales were good, with Cronbach's alphas of 0.79, 0.86, 0.88, and 0.83 and PSIs of 0.76, 0.86, 0.87, and 0.71, respectively. Rasch scores of BF-BS, A, and P showed moderate correlations with SF-36 and WOMAC scores, whereas the EF had significant but weak correlations only with SF36-Social

  8. Integrated Methodology for Software Reliability Analysis

    Marian Pompiliu CRISTESCU


    The techniques most commonly used to ensure the safety and reliability of systems are applied together as a whole, and in most cases the software components are overlooked or too little analyzed. The present paper describes the applicability of fault tree analysis to software systems, known as Software Fault Tree Analysis (SFTA); the fault trees are evaluated using binary decision diagrams, all of this being integrated and used with the help of a Java reliability library.

  9. Reliability Sensitivity Analysis for Location Scale Family

    洪东跑; 张海瑞


    Many products operate under various complex environmental conditions. To describe the dynamic influence of environmental factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of relevant environment variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the radial basis function method. Using the varied-environment test data, the log-likelihood function is transformed into a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimates of the model coefficients are obtained; with the reliability model, the reliability sensitivity is obtained. An example analysis shows that the method is feasible for analyzing the dynamic variation of reliability with environmental factors and is straightforward to apply in engineering.

  10. Space Mission Human Reliability Analysis (HRA) Project

    National Aeronautics and Space Administration — The purpose of this project is to extend current ground-based Human Reliability Analysis (HRA) techniques to a long-duration, space-based tool to more effectively...

  11. Production Facility System Reliability Analysis Report

    Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  12. How young can children reliably and validly self-report their health-related quality of life?: An analysis of 8,591 children across age subgroups with the PedsQL™ 4.0 Generic Core Scales

    Burwinkle Tasha M


    Full Text Available Abstract Background The last decade has evidenced a dramatic increase in the development and utilization of pediatric health-related quality of life (HRQOL) measures in an effort to improve pediatric patient health and well-being and determine the value of healthcare services. The emerging paradigm shift toward patient-reported outcomes (PROs) in clinical trials has provided the opportunity to further emphasize the value and essential need for pediatric patient self-reported outcomes measurement. Data from the PedsQL™ DatabaseSM were utilized to test the hypothesis that children as young as 5 years of age can reliably and validly report their HRQOL. Methods The sample analyzed represented child self-report age data on 8,591 children ages 5 to 16 years from the PedsQL™ 4.0 Generic Core Scales DatabaseSM. Participants were recruited from general pediatric clinics, subspecialty clinics, and hospitals in which children were being seen for well-child checks, mild acute illness, or chronic illness care (n = 2,603; 30.3%), and from a State Children's Health Insurance Program (SCHIP) in California (n = 5,988; 69.7%). Results Items on the PedsQL™ 4.0 Generic Core Scales had minimal missing responses for children as young as 5 years old, supporting feasibility. The majority of the child self-report scales across the age subgroups, including for children as young as 5 years, exceeded the minimum internal consistency reliability standard of 0.70 required for group comparisons, while the Total Scale Scores across the age subgroups approached or exceeded the reliability criterion of 0.90 recommended for analyzing individual patient scale scores. Construct validity was demonstrated utilizing the known-groups approach. For each PedsQL™ scale and summary score, across age subgroups, including children as young as 5 years, healthy children demonstrated a statistically significant difference in HRQOL (better HRQOL) than children with a known chronic health

  13. Modelling application for cognitive reliability and error analysis method

    Fabio De Felice


    Full Text Available The automation of production systems has delegated to machines the execution of highly repetitive and standardized tasks. In the last decade, however, the failure of the automatic factory model has led to partially automated configurations of production systems. Therefore, in this scenario, centrality and responsibility of the role entrusted to the human operators are exalted because it requires problem solving and decision making ability. Thus, human operator is the core of a cognitive process that leads to decisions, influencing the safety of the whole system in function of their reliability. The aim of this paper is to propose a modelling application for cognitive reliability and error analysis method.

  14. Core muscle strength and endurance measures in ambulatory persons with multiple sclerosis: validity and reliability.

    Fry, Donna K; Huang, Min; Rodda, Becky J


    This study examined the test-retest reliability and validity of three core muscle strength tests in individuals with multiple sclerosis (MS). Twenty-one ambulatory individuals with MS completed the curl-up, flexor endurance, and pelvic tilt stabilization tests of core muscle strength. They were retested 1-2 weeks after the first test. The sit-to-stand (STS) test was also conducted on the first test. Descriptive statistics, intraclass correlation coefficients, SEM, and minimal detectable change (MDC) were calculated for each test. Pearson's correlations were calculated between all variables for the first test date. The curl-up test demonstrated excellent test-retest reliability (intraclass correlation coefficient = 0.995), requiring 3.4 additional repetitions in 60 s to demonstrate a detectable change. The curl-up test was moderately correlated with the STS. The flexor endurance and pelvic tilt stabilization tests demonstrated moderate test-retest reliability, with relatively large SEMs and MDCs and only a low correlation with the STS. The curl-up test is recommended as a valid and reliable test of core muscle strength in individuals with MS. The flexor endurance test and the pelvic tilt stabilization test of core muscle strength are not recommended due to large SEM and MDC scores. Further study of core muscle strength and endurance measures is indicated to seek additional tests that are valid and reliable in the MS population.
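    The SEM and MDC statistics used in this record follow from standard formulas: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * SEM * sqrt(2). A minimal sketch with illustrative numbers, not values from the study:

    ```python
    import math

    def sem(sd, icc):
        # Standard error of measurement from the sample SD and test-retest ICC
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sd, icc):
        # Minimal detectable change at 95% confidence (two measurements involved)
        return 1.96 * sem(sd, icc) * math.sqrt(2.0)

    # Illustrative values only (not taken from the study)
    print(round(mdc95(sd=10.0, icc=0.90), 2))  # ≈ 8.77
    ```

    A perfectly reliable test (ICC = 1) yields SEM = 0, so any observed change would count as real; the large MDCs reported above correspond to the lower ICCs of the flexor endurance and pelvic tilt tests.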

  15. Structural reliability analysis and reliability-based design optimization: Recent advances

    Qiu, ZhiPing; Huang, Ren; Wang, XiaoJun; Qi, WuChao


    We review recent research activities on structural reliability analysis, reliability-based design optimization (RBDO) and applications in complex engineering structural design. Several novel uncertainty propagation methods and reliability models, which are the basis of the reliability assessment, are given. In addition, recent developments on reliability evaluation and sensitivity analysis are highlighted as well as implementation strategies for RBDO.

  16. Multi-Disciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.

  17. Reliability Analysis of DOOF for Weibull Distribution

    陈文华; 崔杰; 樊小燕; 卢献彪; 相平


    A hierarchical Bayesian method for estimating the failure probability under DOOF, taking the quasi-Beta distribution as the prior distribution, is proposed in this paper. The weighted least-squares estimation method was used to obtain the formula for computing the reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of accelerated life test data for the connector.

  18. Reliability analysis of flood defence systems

    Steenbergen, H.M.G.M.; Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for the reliability analysis of flood defence systems has been under development. This paper describes the global data requirements for the application and the setup of the models. The analysis generates the probability of system failure and the contribution of ea

  19. Modified Y-TZP core design improves all-ceramic crown reliability.

    Silva, N R F A; Bonfante, E A; Rafferty, B T; Zavanelli, R A; Rekow, E D; Thompson, V P; Coelho, P G


    This study tested the hypothesis that all-ceramic core-veneer system crown reliability is improved by modification of the core design. We modeled a tooth preparation by reducing the height of proximal walls by 1.5 mm and the occlusal surface by 2.0 mm. The CAD-based tooth preparation was replicated and positioned in a dental articulator for core and veneer fabrication. Standard (0.5 mm uniform thickness) and modified (2.5 mm height lingual and proximal cervical areas) core designs were produced, followed by the application of veneer porcelain for a total thickness of 1.5 mm. The crowns were cemented to 30-day-aged composite dies and were either single-load-to-failure or step-stress-accelerated fatigue-tested. Use of level probability plots showed significantly higher reliability for the modified core design group. The fatigue fracture modes were veneer chipping not exposing the core for the standard group, and exposing the veneer core interface for the modified group.




  1. Culture Representation in Human Reliability Analysis

    David Gertman; Julie Marble; Steven Novack


    Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.

  2. Raising the Reliability of Forming Rolls by Alloying Their Core with Copper

    Zhizhkina, N. A.


    The mechanical properties and the structure of forming rolls made from cast irons of different compositions are studied. A novel iron containing a copper additive, which lowers chilling and raises the homogeneity of the structure, is suggested for the roll cores. The use of such iron should raise the reliability of the rolls in operation.

  3. Reliability Analysis of a Steel Frame

    M. Sýkora


    Full Text Available A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee for Structural Safety (JCSS are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time variant analysis.
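    For reference, the reliability index β reported in this record relates to the failure probability by β = -Φ⁻¹(Pf). A generic sketch of that conversion (not the JCSS models themselves):

    ```python
    from statistics import NormalDist

    def beta_from_pf(pf):
        # Reliability index: beta = -Phi^{-1}(Pf)
        return -NormalDist().inv_cdf(pf)

    def pf_from_beta(beta):
        # Failure probability implied by a given reliability index
        return NormalDist().cdf(-beta)

    # beta = 3.95 and 5.56 bound the range reported for the frame
    print(pf_from_beta(3.95), pf_from_beta(5.56))
    ```

    The span from β = 3.95 to 5.56 corresponds to failure probabilities differing by roughly three orders of magnitude, which is why the choice between the time invariant and time variant treatment matters.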

  4. Event/Time/Availability/Reliability-Analysis Program

    Viterna, L. A.; Hoffman, D. J.; Carr, Thomas


    ETARA is interactive, menu-driven program that performs simulations for analysis of reliability, availability, and maintainability. Written to evaluate performance of electrical power system of Space Station Freedom, but methodology and software applied to any system represented by block diagram. Program written in IBM APL.

  5. Reliability analysis of DOOF for Weibull distribution

    陈文华; 崔杰; 樊晓燕; 卢献彪; 相平


    A hierarchical Bayesian method for estimating the failure probability pi under DOOF, taking the quasi-Beta distribution B(pi-1, 1, 1, b) as the prior distribution, is proposed in this paper. The weighted least-squares estimation method was used to obtain the formula for computing the reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of accelerated life test data for the connector.


    LI Hong-shuang; LÜ Zhen-zhou; YUE Zhu-feng


    Support vector machine (SVM) was introduced to analyze the reliability of an implicit performance function, which is difficult to handle with classical methods such as the first order reliability method (FORM) and Monte Carlo simulation (MCS). As a classification method in which the underlying structural risk minimization inference rule is employed, SVM possesses excellent learning capacity with a small amount of information and good generalization over the complete data. Hence, two approaches, i.e., SVM-based FORM and SVM-based MCS, were presented for the structural reliability analysis of the implicit limit state function. Compared to the conventional response surface method (RSM) and the artificial neural network (ANN), which are widely used to replace the implicit state function in order to alleviate computation cost, the more important advantages of SVM are that it can approximate the implicit function with higher precision and better generalization from a small amount of information and avoid the "curse of dimensionality". The SVM-based reliability approaches can approximate the actual performance function over the complete sampling data with a decreased number of implicit performance function analyses (usually finite element analyses), and the computational precision can satisfy engineering requirements, as demonstrated by illustrations.
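    The SVM-based MCS idea can be sketched as follows: train a classifier on a modest number of expensive limit-state evaluations, then run the Monte Carlo simulation on the cheap surrogate. This toy version uses scikit-learn and an analytically known limit state; the sampling plan, kernel, and regularization settings are assumptions, not the authors' procedure.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy implicit limit state (stands in for an expensive FE analysis):
    # g(x) < 0 means failure; here g(x1, x2) = 3 - (x1 + x2), x ~ N(0, I).
    def g(x):
        return 3.0 - x.sum(axis=1)

    # 1) Train an SVM classifier on a modest design of experiments
    X_train = rng.normal(size=(400, 2)) * 2.0    # spread samples over the space
    y_train = (g(X_train) < 0).astype(int)       # 1 = failure, 0 = safe
    svm = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

    # 2) Monte Carlo simulation on the cheap surrogate instead of g itself
    X_mc = rng.normal(size=(200_000, 2))
    pf_hat = svm.predict(X_mc).mean()

    # Analytic reference for this toy case: Pf = Phi(-3/sqrt(2)) ≈ 0.017
    print(pf_hat)
    ```

    The point of the surrogate is that the 200,000 Monte Carlo samples never touch g; only the 400 training points would require finite element runs in a real application.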

  7. Human reliability analysis of control room operators

    Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)


    Human reliability is the probability that a person correctly performs a system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field in recent years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has the ergonomic work analysis as its tool. It focuses on the study of the operators' activities in their physical and mental dimensions, considering at the same time the observed characteristics of the operators and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of industrial plant control room operators, using a framework that includes the approaches used by ATHEANA and THERP and the ergonomic work analysis. (author)

  8. Reliability Analysis of Elasto-Plastic Structures


    . Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts...... are described and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis...... are obtained if the failure mechanisms are used. Lower bounds can be calculated on the basis of series systems where the elements are the non-failed elements in a non-failed structure (see Augusti & Baratta [3])....

  9. Bridging Resilience Engineering and Human Reliability Analysis

    Ronald L. Boring


    There has been strong interest in the new and emerging field called resilience engineering. This field has been quick to align itself with many existing safety disciplines, but it has also distanced itself from the field of human reliability analysis. To date, the discussion has been somewhat one-sided, with much discussion about the new insights afforded by resilience engineering. This paper presents an attempt to address resilience engineering from the perspective of human reliability analysis (HRA). It is argued that HRA shares much in common with resilience engineering and that, in fact, it can help strengthen nascent ideas in resilience engineering. This paper seeks to clarify and ultimately refute the arguments that have served to divide HRA and resilience engineering.

  10. Reliability analysis of wastewater treatment plants.

    Oliveira, Sílvia C; Von Sperling, Marcos


    This article presents a reliability analysis of 166 full-scale wastewater treatment plants operating in Brazil. Six different processes have been investigated, comprising septic tank+anaerobic filter, facultative pond, anaerobic pond+facultative pond, activated sludge, upflow anaerobic sludge blanket (UASB) reactors alone and UASB reactors followed by post-treatment. A methodology developed by Niku et al. [1979. Performance of activated sludge process and reliability-based design. J. Water Pollut. Control Assoc., 51(12), 2841-2857] is used for determining the coefficients of reliability (COR), in terms of the compliance of effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total nitrogen (TN), total phosphorus (TP) and fecal or thermotolerant coliforms (FC) with discharge standards. The design concentrations necessary to meet the prevailing discharge standards and the expected compliance percentages have been calculated from the COR obtained. The results showed that few plants, under the observed operating conditions, would be able to present reliable performances considering the compliance with the analyzed standards. The article also discusses the importance of understanding the lognormal behavior of the data in setting up discharge standards, in interpreting monitoring results and compliance with the legislation.
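    The coefficient of reliability used in this record follows, under the lognormal assumption, the standard form COR = sqrt(1 + CV^2) * exp(-Z_{1-alpha} * sqrt(ln(1 + CV^2))), with the design mean concentration m_x = COR * X_s. A sketch of that formula (verify against the cited Niku et al. paper before use):

    ```python
    import math
    from statistics import NormalDist

    def cor(cv, reliability):
        """Coefficient of reliability for a lognormally distributed effluent series.

        cv          -- coefficient of variation of the effluent concentration
        reliability -- target probability of complying with the standard (e.g. 0.95)
        """
        z = NormalDist().inv_cdf(reliability)     # Z_{1-alpha}
        s2 = math.log(1.0 + cv * cv)              # variance of ln(concentration)
        return math.sqrt(1.0 + cv * cv) * math.exp(-z * math.sqrt(s2))

    def design_concentration(x_s, cv, reliability):
        # Mean design concentration needed to meet standard x_s at the target reliability
        return cor(cv, reliability) * x_s
    ```

    With no variability (CV = 0) the COR is 1, i.e. the plant can be designed right at the standard; the more variable the effluent, the further below the standard the mean must sit, which is the lognormal behavior the article stresses.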

  11. Analysis of circuits including magnetic cores (MTRAC)

    Hanzen, G. R.; Nitzan, D.; Herndon, J. R.


    Development of automated circuit analysis computer program to provide transient analysis of circuits with magnetic cores is discussed. Allowance is made for complications caused by nonlinearity of switching core model and magnetic coupling among loop currents. Computer program is conducted on Univac 1108 computer using FORTRAN IV.

  12. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...... analysis (“data” do not exist in isolation of their provenance). The Total Sampling Error (TSE) is by far the dominating contribution to all analytical endeavours, often 100+ times larger than the Total Analytical Error (TAE).We present a summarizing set of only seven Sampling Unit Operations (SUOs...

  13. The quantitative failure of human reliability analysis

    Bennett, C.T.


    This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. Actually, the author attacks historic and current HRA as having failed in informing policy makers who make decisions based on risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given subjective and empirical probabilities.

  14. Modified Core Wash Cytology: A reliable same day biopsy result for breast clinics.

    Bulte, J P; Wauters, C A P; Duijm, L E M; de Wilt, J H W; Strobbe, L J A


    Fine Needle Aspiration Biopsy (FNAB), Core Needle Biopsy (CNB) and hybrid techniques including Core Wash Cytology (CWC) are available for same-day diagnosis of breast lesions. In CWC, a washing of the biopsy core is processed for a provisional cytological diagnosis, after which the core is processed like a regular CNB. This study focuses on the reliability of CWC in daily practice. All consecutive CWC procedures performed in a referral breast centre between May 2009 and May 2012 were reviewed, correlating CWC results with the CNB result, definitive diagnosis after surgical resection and/or follow-up. Symptomatic as well as screen-detected lesions undergoing CNB were included. 1253 CWC procedures were performed. Definitive histology showed 849 (68%) malignant and 404 (32%) benign lesions. 80% of CWC procedures yielded a conclusive diagnosis: this percentage was higher amongst malignant lesions and lower for benign lesions: 89% and 62%, respectively. Sensitivity and specificity of a conclusive CWC result were 98.3% and 90.4%, respectively. The eventual incidence of malignancy in the cytological 'atypical' group (5%) was similar to that in the cytological 'benign' group (6%). CWC can be used to make a reliable provisional diagnosis of breast lesions within the hour. The high probability of conclusive results in malignant lesions makes CWC well suited for high risk populations. Copyright © 2016 Elsevier Ltd, BASO ~ the Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
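    The sensitivity and specificity quoted for a conclusive CWC result are the usual 2x2 proportions. A sketch with hypothetical counts chosen only to produce percentages of the same order; the abstract does not give the raw table:

    ```python
    def sensitivity(tp, fn):
        # Proportion of malignant lesions with a positive (malignant) test result
        return tp / (tp + fn)

    def specificity(tn, fp):
        # Proportion of benign lesions with a negative (benign) test result
        return tn / (tn + fp)

    # Hypothetical counts for illustration only
    print(round(sensitivity(tp=590, fn=10), 3),
          round(specificity(tn=226, fp=24), 3))
    ```

    Note that both figures condition on the result being conclusive; the 20% of inconclusive procedures fall outside this 2x2 table, which is the caveat when comparing CWC against plain CNB.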

  15. Reliability of photographic posture analysis of adolescents.

    Hazar, Zeynep; Karabicak, Gul Oznur; Tiftikci, Ugur


    [Purpose] Postural problems of adolescents need to be evaluated accurately because they may lead to greater problems in the musculoskeletal system as adolescents develop. Although photographic posture analysis has been frequently used, simpler and more accessible methods are still needed. The purpose of this study was to investigate the inter- and intra-rater reliability of photographic posture analysis using MB-Ruler software. [Subjects and Methods] Subjects were 30 adolescents (15 girls and 15 boys, mean age: 16.4±0.4 years, mean height 166.3±6.7 cm, mean weight 63.8±15.1 kg), and photographs of their habitual standing posture were taken in the sagittal plane. For the evaluation of postural angles, reflective markers were placed on anatomical landmarks. For angular measurements, MB-Ruler (Markus Bader - MB Software Solutions, triangular screen ruler) was used. Photographic evaluations were performed by two observers, with a repetition after a week. Test-retest and inter-rater reliability were calculated using intra-class correlation coefficients (ICC). [Results] Inter-rater (ICC>0.972) and test-retest (ICC>0.774) reliability were found to be in the range of acceptable to excellent. [Conclusion] Reference angles for postural evaluation were found to be reliable and repeatable. The present method was found to be easy and non-invasive, and it may be utilized by researchers in search of an alternative method for photographic postural assessments.

  16. Representative Sampling for reliable data analysis

    Petersen, Lars; Esbensen, Kim Harry


    The Theory of Sampling (TOS) provides a description of all errors involved in sampling of heterogeneous materials as well as all necessary tools for their evaluation, elimination and/or minimization. This tutorial elaborates on—and illustrates—selected central aspects of TOS. The theoretical...... regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...

  17. Reliability Analysis of Adhesive Bonded Scarf Joints

    Kimiaeifar, Amin; Toft, Henrik Stensgaard; Lund, Erik;


    A probabilistic model for the reliability analysis of adhesive bonded scarfed lap joints subjected to static loading is developed. It is representative for the main laminate in a wind turbine blade subjected to flapwise bending. The structural analysis is based on a three dimensional (3D) finite...... the FEA model, and a sensitivity analysis on the influence of various geometrical parameters and material properties on the maximum stress is conducted. Because the yield behavior of many polymeric structural adhesives is dependent on both deviatoric and hydrostatic stress components, different ratios...... of the compressive to tensile adhesive yield stresses in the failure criterion are considered. It is shown that the chosen failure criterion, the scarf angle and the load are significant for the assessment of the probability of failure....


    Bowerman, P. N.


    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
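    For groups with equal item probabilities, the k-out-of-n success probability RELAV evaluates is a cumulative binomial tail. A minimal sketch, not RELAV's enhanced algorithm:

    ```python
    from math import comb

    def k_out_of_n(k, n, p):
        """Probability that at least k of n identical items (each succeeding with
        probability p) work: the cumulative binomial tail sum."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # A 2-out-of-3 redundant group of items with 0.9 reliability each
    print(k_out_of_n(2, 3, 0.9))   # ≈ 0.972
    ```

    RELAV's folding strategy then treats each evaluated group as a single component of its parent group, repeating the same calculation level by level up to the system.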

  19. Integrated Reliability and Risk Analysis System (IRRAS)

    Russell, K. D.; McKay, M. K.; Sattison, M. B.; Skinner, N. L.; Wood, S. T. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rasmuson, D. M. [Nuclear Regulatory Commission, Washington, DC (United States)


    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance.

  20. Advancing Usability Evaluation through Human Reliability Analysis

    Ronald L. Boring; David I. Gertman


    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
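    The UEP construction described in this record multiplies a nominal error probability by heuristic-derived performance-shaping multipliers. A toy illustration; the heuristic names and multiplier values below are invented for the example, not SPAR-H's tables:

    ```python
    def usability_error_probability(nominal_hep, multipliers):
        """Nominal human error probability scaled by heuristic-derived PSF
        multipliers, capped at 1.0 as probabilities must be."""
        uep = nominal_hep
        for m in multipliers.values():
            uep *= m
        return min(uep, 1.0)

    # Invented multipliers: > 1 degrades performance, 1.0 is nominal
    heuristics = {"visibility_of_system_status": 2.0,
                  "error_prevention": 5.0,
                  "consistency_and_standards": 1.0}
    print(usability_error_probability(0.001, heuristics))   # 0.001 * 2 * 5
    ```

    As the paper notes, the resulting UEP is not a literal error probability, but ranking issues by their multiplied contribution gives heuristic evaluation a quantitative prioritization it otherwise lacks.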

  1. Reliability Analysis of Tubular Joints in Offshore Structures

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard


    Reliability analysis of single tubular joints and offshore platforms with tubular joints is presented. The failure modes considered are yielding, punching, buckling and fatigue failure. Element reliability as well as systems reliability approaches are used and illustrated by several examples. Finally, optimal design of tubular joints with reliability constraints is discussed and illustrated by an example.

  2. Software Architecture Reliability Analysis using Failure Scenarios

    Tekinerdogan, B.; Sözer, Hasan; Aksit, Mehmet

    With the increasing size and complexity of software in embedded systems, software has now become a primary threat to reliability. Several mature conventional reliability engineering techniques exist in the literature, but traditionally these have primarily addressed failures in hardware components.


  3. Human Reliability Analysis for Computerized Procedures

    Ronald L. Boring; David I. Gertman; Katya Le Blanc


    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  4. Human Reliability Analysis for Small Modular Reactors

    Ronald L. Boring; David I. Gertman


    Because no human reliability analysis (HRA) method was specifically developed for small modular reactors (SMRs), the application of any current HRA method to SMRs represents tradeoffs. A first-generation HRA method like THERP provides clearly defined activity types, but these activity types do not map to the human-system interface or concept of operations confronting SMR operators. A second-generation HRA method like ATHEANA is flexible enough to be used for SMR applications, but there is currently insufficient guidance for the analyst, requiring considerably more first-of-a-kind analyses and extensive SMR expertise in order to complete a quality HRA. Although no current HRA method is optimized to SMRs, it is possible to use existing HRA methods to identify errors, incorporate them as human failure events in the probabilistic risk assessment (PRA), and quantify them. In this paper, we provide preliminary guidance to assist the human reliability analyst and reviewer in understanding how to apply current HRA methods to the domain of SMRs. While it is possible to perform a satisfactory HRA using existing HRA methods, ultimately it is desirable to formally incorporate SMR considerations into the methods. This may require the development of new HRA methods. More practicably, existing methods need to be adapted to incorporate SMRs. Such adaptations may take the form of guidance on the complex mapping between conventional light water reactors and small modular reactors. While many behaviors and activities are shared between current plants and SMRs, the methods must adapt if they are to perform a valid and accurate analysis of plant personnel performance in SMRs.

  5. The development and reliability of a simple field based screening tool to assess core stability in athletes.

    O'Connor, S; McCaffrey, N; Whyte, E; Moran, K


    To adapt the trunk stability test to facilitate further sub-classification of higher levels of core stability in athletes for use as a screening tool, and to establish the inter-tester and intra-tester reliability of this adapted core stability test. Reliability study. Collegiate athletic therapy facilities. Fifteen physically active male subjects (19.46 ± 0.63 years) free from any orthopaedic or neurological disorders were recruited from a convenience sample of collegiate students. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were computed to establish inter-tester and intra-tester reliability. Excellent ICC values were observed in the adapted core stability test for inter-tester reliability (0.97), with good to excellent intra-tester reliability (0.73-0.90). While the 95% CI was narrow for inter-tester reliability, the 95% CIs for Testers A and C were wide compared with Tester B's. The adapted core stability test developed in this study is a quick and simple field-based test to administer that can further subdivide athletes with high levels of core stability. The test demonstrated high inter-tester and intra-tester reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
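
    The inter-tester agreement statistic reported above can be reproduced from a standard two-way ANOVA decomposition. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater); the score table below is hypothetical, not the study's data:

```python
import statistics

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: list of per-subject lists, one score per rater."""
    n, k = len(scores), len(scores[0])
    grand = statistics.mean(v for row in scores for v in row)
    row_means = [statistics.mean(r) for r in scores]
    col_means = [statistics.mean(c) for c in zip(*scores)]
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical example: three testers scoring eight athletes
scores = [[4, 4, 5], [2, 2, 2], [5, 5, 5], [3, 4, 3],
          [1, 1, 2], [4, 5, 4], [2, 3, 2], [5, 5, 5]]
icc = icc_2_1(scores)
```

    With perfect agreement among raters the formula returns 1; disagreement inflates the error mean square and pulls the ICC down.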

  6. [Qualitative analysis: theory, steps and reliability].

    Minayo, Maria Cecília de Souza


    This essay seeks to conduct in-depth analysis of qualitative research, based on benchmark authors and the author's own experience. The hypothesis is that in order for an analysis to be considered reliable, it needs to be based on structuring terms of qualitative research, namely the verbs 'comprehend' and 'interpret', and the nouns 'experience', 'common sense' and 'social action'. The 10 steps begin with the construction of the scientific object by its inclusion on the national and international agenda; the development of tools that make the theoretical concepts tangible; conducting field work that involves the researcher empathetically with the participants in the use of various techniques and approaches, making it possible to build relationships, observations and a narrative with perspective. Finally, the author deals with the analysis proper, showing how the object, which has already been studied in all the previous steps, should become a second-order construct, in which the logic of the actors in their diversity and not merely their speech predominates. The final report must be a theoretic, contextual, concise and clear narrative.

  7. Task Decomposition in Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory


    In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down (defined as a subset of the PSA), whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up (derived from a task analysis conducted by human factors experts). The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  8. Reliability analysis of an associated system

    陈长杰; 魏一鸣; 蔡嗣经


    Based on the engineering reliability of large complex systems and the distinct characteristics of soft systems, new concepts and theory concerning medium elements and the associated system are developed, and a reliability logic model of the associated system is provided. Through a field investigation of the trial operation, the engineering reliability of the paste fill system in the No.2 mine of Jinchuan Non-ferrous Metallic Corporation is analyzed using the theory of the associated system.

  9. Sensitivity Analysis for the System Reliability Function


    reliabilities. The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability … Monte Carlo method. The polynomial-time algorithm of Agrawal and Satyanarayana (104) for the exact reliability computation for series-parallel systems exemplifies … consideration. As an example for the s-t connectedness problem, let … denote edge-disjoint minimal s-t paths of G and let … denote edge-disjoint

  10. Substituting supplementary subtests for core subtests on reliability of WISC-IV Indexes and Full Scale IQ.

    Ryan, Joseph J; Glass, Laura A


    The effects of replacing core subtests with supplementary subtests on composite score reliabilities were evaluated for the WISC-IV Indexes and Full Scale IQ. When Wechsler's guidelines are followed, i.e., only one substitution for each Index; no more than two substitutions from different Indexes when assessing the Full Scale IQ, summary score reliabilities remain high, and measurement error, as defined by confidence intervals around obtained scores, never increases by more than 1 index score point. In three instances, substitution of a supplementary subtest for a core subtest actually increased the reliabilities and decreased the amount of associated measurement error.

  11. A Novel Two-Terminal Reliability Analysis for MANET

    Xibin Zhao; Zhiyang You; Hai Wan


    Mobile ad hoc network (MANET) is a dynamic wireless communication network. Because of its dynamic and infrastructureless characteristics, MANET is vulnerable in terms of reliability. This paper presents a novel reliability analysis for MANET. The node mobility effect and the node reliability based on a real MANET platform are modeled and analyzed. An effective Monte Carlo method for reliability analysis is proposed. A detailed evaluation is performed in terms of the experiment results.
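
    Two-terminal reliability of the kind analyzed here can be illustrated on a tiny static graph: enumerate all edge states for an exact answer, then compare against a Monte Carlo estimate. This is a sketch under simplifying assumptions (independent, identically reliable links, no mobility), not the paper's platform model:

```python
import itertools
import random

def connected(n, edges, up, s, t):
    """BFS using only the edges whose indices are in `up`."""
    adj = {v: [] for v in range(n)}
    for i in up:
        a, b = edges[i]
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        if v == t:
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def exact_reliability(n, edges, p, s, t):
    """Exact two-terminal reliability by state enumeration (tiny graphs only)."""
    total = 0.0
    for states in itertools.product([0, 1], repeat=len(edges)):
        up = [i for i, x in enumerate(states) if x]
        prob = 1.0
        for x in states:
            prob *= p if x else (1.0 - p)
        if connected(n, edges, up, s, t):
            total += prob
    return total

def mc_reliability(n, edges, p, s, t, trials, rng):
    """Monte Carlo estimate: sample edge states, count s-t connected samples."""
    hits = 0
    for _ in range(trials):
        up = [i for i in range(len(edges)) if rng.random() < p]
        hits += connected(n, edges, up, s, t)
    return hits / trials

# The classic 4-node "bridge" network between terminals 0 and 3
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
rng = random.Random(42)
exact = exact_reliability(4, edges, 0.9, 0, 3)
est = mc_reliability(4, edges, 0.9, 0, 3, 20000, rng)
```

    For the bridge network the closed form 2p² + 2p³ − 5p⁴ + 2p⁵ gives 0.97848 at p = 0.9, which both estimators should approach.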


  13. Solving reliability analysis problems in the polar space

    Ghasem Ezzati; Musa Mammadov; Siddhivinayak Kulkarni


    An optimization model that is widely used in engineering problems is Reliability-Based Design Optimization (RBDO). The input data of the RBDO are non-deterministic and its constraints are probabilistic. The RBDO aims at minimizing cost while ensuring that reliability is at least at an accepted level. Reliability analysis is an important step in two-level RBDO approaches. Although many methods have been introduced for the reliability analysis loop of the RBDO, there are still many drawbacks in their efficie...
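
    The reliability-analysis loop inside an RBDO can be illustrated for the simplest case, a linear limit state g = R − S with normal variables, where the reliability index has a closed form and a crude Monte Carlo estimate can be checked against it. A sketch with illustrative numbers (not from the paper):

```python
import math
import random

def reliability_index(mu_r, sig_r, mu_s, sig_s):
    """Exact reliability index for the linear limit state g = R - S,
    R and S independent normal: beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)."""
    return (mu_r - mu_s) / math.hypot(sig_r, sig_s)

def mc_failure_prob(mu_r, sig_r, mu_s, sig_s, trials, rng):
    """Crude Monte Carlo estimate of P(g < 0)."""
    fails = sum(rng.gauss(mu_r, sig_r) - rng.gauss(mu_s, sig_s) < 0
                for _ in range(trials))
    return fails / trials

rng = random.Random(7)
beta = reliability_index(200, 20, 150, 15)        # (200-150)/25 = 2.0
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)
pf_mc = mc_failure_prob(200, 20, 150, 15, 200000, rng)
```

    In a two-level RBDO this check sits inside the optimization loop: the outer loop adjusts design variables, the inner loop verifies that beta stays above the target.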

  14. Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters

    Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.


    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes (sliding failure, failure of the foundation, and overturning failure) are described. Relevant design variables are identified...

  15. Degraded core analysis for the PWR

    Gittus, J.H.


    The paper presents an analysis of the probability and consequences of degraded core accidents for the PWR. The article is based on a paper which was presented by the author to the Sizewell-B public inquiry. Degraded core accidents are examined with respect to the initiating events, safety plant failure, and processes with a bearing on containment failure. Accident types and frequencies are discussed, as well as the dispersion of radionuclides. Accident risks, i.e. individual and societal risks in degraded core accidents, are assessed from the amount of radionuclides released, the weather, the population distribution, and the accident frequencies. Uncertainties in the assessment of degraded core accidents are also summarized. (U.K.)

  16. Overview on Hydrate Coring, Handling and Analysis

    Jon Burger; Deepak Gupta; Patrick Jacobs; John Shillinglaw


    Gas hydrates are crystalline, ice-like compounds of gas and water molecules that are formed under certain thermodynamic conditions. Hydrate deposits occur naturally within ocean sediments just below the sea floor at temperatures and pressures existing below about 500 meters water depth. Gas hydrate is also stable in conjunction with the permafrost in the Arctic. Most marine gas hydrate is formed of microbially generated gas. It binds huge amounts of methane into the sediments. Worldwide, gas hydrate is estimated to hold about 10^16 kg of organic carbon in the form of methane (Kvenvolden et al., 1993). Gas hydrate is one of the fossil fuel resources that is yet untapped, but may play a major role in meeting the energy challenge of this century. In June 2002, Westport Technology Center was requested by the Department of Energy (DOE) to prepare a "Best Practices Manual on Gas Hydrate Coring, Handling and Analysis" under Award No. DE-FC26-02NT41327. The scope of the task was specifically targeted at coring sediments with hydrates in Alaska, the Gulf of Mexico (GOM) and from the present Ocean Drilling Program (ODP) drillship. The specific subjects under this scope were defined in three stages as follows. Stage 1: collect information on coring sediments with hydrates, core handling, core preservation, sample transportation, analysis of the core, and long-term preservation. Stage 2: provide copies of the first draft to a list of experts and stakeholders designated by DOE. Stage 3: produce a second draft of the manual, with the benefit of input from external review, for delivery. The manual provides an overview of existing information available in the published literature and reports on coring, analysis, preservation and transport of gas hydrates for laboratory analysis as of June 2003. The manual was delivered as draft version 3 to the DOE Project Manager for distribution in July 2003. This Final Report is provided for records purposes.

  17. Reliability in Cross-National Content Analysis.

    Peter, Jochen; Lauf, Edmund


    Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…


  19. Software reliability experiments data analysis and investigation

    Walker, J. Leslie; Caglayan, Alper K.


    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  20. Reliability Analysis of Slope Stability by Central Point Method

    Li, Chunge; WU Congliang


    Given the uncertainty and variability of slope stability analysis parameters, this paper proceeds from the perspective of probability theory and statistics, based on reliability theory. Through the central point method of reliability analysis, a performance function for the reliability of slope stability analysis is established. Moreover, the central point method and conventional limit equilibrium methods are compared through a calculation example. The approach's numerical ...
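
    The central point (mean-value first-order second-moment) method linearizes the performance function at the mean point and reads the reliability index off the first two moments. A generic sketch with a hypothetical performance function, not the paper's slope model:

```python
import math

def central_point_beta(g, mu, sigma, h=1e-6):
    """Mean-value FOSM reliability index: beta = g(mu) / sigma_g,
    where sigma_g comes from a first-order Taylor expansion at the mean point.
    Gradients are approximated by central differences with step h."""
    g0 = g(mu)
    var = 0.0
    for i, (m, s) in enumerate(zip(mu, sigma)):
        x_hi = list(mu)
        x_hi[i] = m + h
        x_lo = list(mu)
        x_lo[i] = m - h
        grad = (g(x_hi) - g(x_lo)) / (2 * h)
        var += (grad * s) ** 2
    return g0 / math.sqrt(var)

# Hypothetical limit state: resisting term x0*x1 minus load effect x2
g = lambda x: x[0] * x[1] - x[2]
beta = central_point_beta(g, mu=[40.0, 50.0, 1000.0], sigma=[5.0, 2.5, 200.0])
```

    For this bilinear example the gradients are exact, so beta = 1000 / sqrt(112500) ≈ 2.98; the method's known weakness is that the result depends on how the limit state equation is written, which is what motivates the more advanced methods the paper compares against.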

  1. Individual Differences in Human Reliability Analysis

    Jeffrey C. Joe; Ronald L. Boring


    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.


    周锐; 李珍; 宋兵; 谢昕; 李贞; 陆岸青


    X-ray fluorescence (XRF) core scanning is increasingly accepted as an effective method for obtaining high-resolution elemental records because of its rapid, non-destructive and continuous measurements. In order to assess the reliability of XRF core scanning, we compare its measurement results with those from the well-accepted traditional X-ray fluorescence spectrometer. The materials used are the sediments of core XJ02 (32°19′57″N, 119°16′22″E, core length 31.8 m) from the apex of the Yangtze River delta, consisting mainly of lacustrine-swamp silt and clay. We measured the whole core using the XRF core scanner (XRF-cps) at Tongji University, and at the same time 70 samples (at intervals of 45 cm) from core XJ02 were selected for element content measurement by the traditional X-ray fluorescence spectrometer (XRF-ppm). Comparing the results for the twelve detectable elements from the two methods, and carrying out a correlation analysis for each element, we group the elements into three types based on the correlation coefficients (R2): 1) type one includes Ti, Zn, Rb, Sr, Fe and Ca, which are highly correlated (R2 > 0.8), so that XRF scanning intensity reflects the variation of their contents well; 2) type two includes Zr, Al and K, which are more weakly correlated (0.5 < R2 < 0.8); and 3) type three includes Pb, Si and P, which are poorly correlated. XRF scanning intensity is affected by water content, especially for elements of low atomic weight such as Si, and for low-content elements such as P and Pb the scanning intensity does not reliably reflect the element content in the sediment.


    Popescu V.S.


    Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development and requires special attention. Operation of distribution systems is accompanied by a number of factors that randomly produce a large number of unplanned interruptions. Research has shown that the predominant factors that have a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). In the article, the influence of this random behavior is studied and estimations of the reliability of predominantly rural electrical distribution systems are presented.

  4. Reliability Analysis on English Writing Test of SHSEE in Shanghai

    黄玉麒; 黄芳


    As a subjective test, the validity of a writing test is acceptable, but what about its reliability? The writing test occupies a special position in the senior high school entrance examination (SHSEE for short), so it is important to ensure its reliability. Through an analysis of recent years' English writing items in the SHSEE, the author offers suggestions on how to guarantee the reliability of writing tests.

  5. Reliability and properties of core materials for all-ceramic dental restorations

    Seiji Ban


    Various core materials have been used for all-ceramic dental restorations. Since many foreign zirconia product systems were introduced to the Japanese dental market in the past few years, research and papers on zirconia as a ceramic biomaterial have drawn considerable attention. Recently, most manufacturers supply zirconia blocks suitable for multi-unit posterior bridges using CAD/CAM, because zirconia has excellent mechanical properties comparable to metal, due to its microstructure. The properties of conventional zirconia have been further improved by nano-scale composites such as the zirconia/alumina nanocomposite (NANOZR). There are many interesting behaviors, such as long-term stability related to low-temperature degradation, the effect of sandblasting and heat treatment on microstructure and strength, bonding to veneering porcelains, bonding to cement, visible-light translucency related to esthetic restoration, X-ray opacity, biocompatibility, and the fracture load of clinical bridges, as well as the lifetime and clinical survival rates of restorations made with zirconia. Drawing on recent material research on zirconia in Japan and worldwide, this review considers these properties of zirconia and its reliability as a core material for all-ceramic dental restorations.

  6. Analysis of Some Software Reliability Models


    The software reliability & maintainability evaluation tool (SRMET 3.0), developed by the Software Evaluation and Test Center of China Aerospace Mechanical Corporation, is introduced in detail in this paper. SRMET 3.0 is supported by seven software reliability models and four software maintainability models. The numerical characteristics of all those models are studied in depth, and the corresponding numerical algorithms for each model are also given.

  7. CORE

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin


    different flows. Instead of maintaining these approaches separately, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra-session coding but allows nodes in the network to set up inter-session coding regions where flows intersect. Routes for unicast sessions are agnostic to other sessions and set up beforehand; CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...
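
    The intra-session building block of such a protocol, random linear network coding, can be sketched in a few lines over GF(2): coded packets are random XOR combinations of source packets, and any full-rank set of received packets is decoded by Gaussian elimination. Payload values and packet counts below are illustrative, not taken from the paper:

```python
import random

def rlnc_encode(packets, rng):
    """Produce one coded packet: a random GF(2) combination of the sources."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(packets))] = 1  # avoid the useless all-zero row
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p  # XOR is addition over GF(2)
    return coeffs, payload

def rlnc_decode(coded, n):
    """Gaussian elimination over GF(2); returns the sources, or None if rank < n."""
    pivots = {}
    for c, p in coded:
        c = list(c)
        for j in range(n):
            if c[j]:
                if j in pivots:          # eliminate against an existing pivot row
                    pc, pp = pivots[j]
                    c = [a ^ b for a, b in zip(c, pc)]
                    p ^= pp
                else:                    # new pivot column
                    pivots[j] = (c, p)
                    break
    if len(pivots) < n:
        return None                      # not yet full rank: keep listening
    out = [0] * n
    for j in sorted(pivots, reverse=True):  # back-substitution
        c, p = pivots[j]
        for k in range(j + 1, n):
            if c[k]:
                p ^= out[k]
        out[j] = p
    return out

rng = random.Random(1)
source = [0xDEADBEEF, 0xCAFEBABE, 0x12345678]   # payloads as integers
coded = [rlnc_encode(source, rng) for _ in range(20)]  # redundancy tolerates loss
recovered, used = None, 3
while recovered is None and used <= len(coded):
    recovered = rlnc_decode(coded[:used], len(source))
    used += 1
```

    The receiver does not care which particular coded packets arrive, only that their coefficient vectors span the source space, which is what lets RLNC absorb losses in the overhearing or relaying process.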


  9. System reliability analysis for kinematic performance of planar mechanisms

    ZHANG YiMin; HUANG XianZhen; ZHANG XuFang; HE XiangDong; WEN BangChun


    Based on reliability theory and mechanism kinematic accuracy theory, we propose a general methodology for system reliability analysis of the kinematic performance of planar mechanisms. The loop closure equations are used to estimate the kinematic performance errors of planar mechanisms. Reliability and system reliability theories are introduced to develop the limit state functions (LSF) for failure of kinematic performance qualities. The statistical fourth moment method and the Edgeworth series technique are used for system reliability analysis of the kinematic performance of planar mechanisms, which relaxes the restrictions on the probability distributions of design variables. Finally, the practicality, efficiency and accuracy of the proposed method are demonstrated by numerical examples.

  10. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Ronald Laurids Boring


    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  11. CFD Analysis of Core Bypass Phenomena

    Richard W. Johnson; Hiroyuki Sato; Richard R. Schultz


    The U.S. Department of Energy is exploring the potential for the VHTR which will be either of a prismatic or a pebble-bed type. One important design consideration for the reactor core of a prismatic VHTR is coolant bypass flow which occurs in the interstitial regions between fuel blocks. Such gaps are an inherent presence in the reactor core because of tolerances in manufacturing the blocks and the inexact nature of their installation. Furthermore, the geometry of the graphite blocks changes over the lifetime of the reactor because of thermal expansion and irradiation damage. The existence of the gaps induces a flow bias in the fuel blocks and results in unexpected increase of maximum fuel temperature. Traditionally, simplified methods such as flow network calculations employing experimental correlations are used to estimate flow and temperature distributions in the core design. However, the distribution of temperature in the fuel pins and graphite blocks as well as coolant outlet temperatures are strongly coupled with the local heat generation rate within fuel blocks which is not uniformly distributed in the core. Hence, it is crucial to establish mechanistic based methods which can be applied to the reactor core thermal hydraulic design and safety analysis. Computational Fluid Dynamics (CFD) codes, which have a capability of local physics based simulation, are widely used in various industrial fields. This study investigates core bypass flow phenomena with the assistance of commercial CFD codes and establishes a baseline for evaluation methods. A one-twelfth sector of the hexagonal block surface is modeled and extruded down to whole core length of 10.704m. The computational domain is divided vertically with an upper reflector, a fuel section and a lower reflector. Each side of the sector grid can be set as a symmetry boundary


  13. Analysis on testing and operational reliability of software

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang


    Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. On the basis of analyzing the similarities and differences of the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of the comparison between the operational failure ratio and the predicted testing failure ratio were conducted, and the mathematical discussion and analysis were performed in detail. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived by applying this method: one is to continue testing to meet the required reliability level of users, and the other is that testing stops when the required operational reliability is met, thus reducing the testing cost.
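
    Comparisons of this kind rest on an NHPP growth model's mean value function. A sketch using the Goel-Okumoto form m(t) = a(1 − e^(−bt)), one common NHPP model; the parameter values here are illustrative, since the paper does not specify them:

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)):
    expected cumulative failures by time t (a = total expected faults)."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of no failure in (t, t+x]: R(x|t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(go_mean_failures(t + x, a, b) - go_mean_failures(t, a, b)))

# Hypothetical fitted parameters: 100 total expected faults, b = 0.02 per hour
a, b = 100.0, 0.02
m_200 = go_mean_failures(200, a, b)          # expected failures by t = 200 h
r_next = conditional_reliability(10, 200, a, b)  # reliability over the next 10 h
```

    The failure intensity m'(t) is the quantity being compared in the abstract: testing continues while the predicted testing failure ratio exceeds the operational target, and stops once the required operational reliability is met.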

  14. Reliability estimation in a multilevel confirmatory factor analysis framework.

    Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J


    Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
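
    As a single-level illustration of the alpha and composite reliability (omega) coefficients the study compares (its actual estimates are computed per level via multilevel CFA in Mplus; this sketch is plain Python with invented loadings and covariances):

```python
def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix (lists of lists)."""
    k = len(cov)
    total_var = sum(sum(row) for row in cov)  # variance of the sum score
    item_var = sum(cov[i][i] for i in range(k))
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def composite_omega(loadings, resid_vars):
    """McDonald's composite reliability (omega) from standardized factor
    loadings and residual variances of a single-factor model."""
    s = sum(loadings)
    return s * s / (s * s + sum(resid_vars))

# Illustrative values: 3 exchangeable items with inter-item covariance 0.5,
# and 4 items each loading 0.7 on one factor.
alpha = cronbach_alpha([[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]])
omega = composite_omega([0.7] * 4, [1 - 0.7 ** 2] * 4)
```

    In the multilevel setting, the same formulas are applied separately to the within-cluster and between-cluster covariance (or loading) matrices, which is why single-level estimates mislead whenever the two levels differ.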

  15. Validity and reliability of the proposed core competency for infection control nurses of hospitals in Hong Kong.

    Chan, Wai Fong; Adamson, Bob; Chung, Joanne W Y; Chow, Meyrick C M


    Literature review and the Delphi approach were used to draft the core competency items of hospital infection control nurses in Hong Kong. Content validity, internal consistency, and test-retest reliability of the proposed core competency were ensured. The result serves as the foundation of developing training and assessment tools for infection control nurses in Hong Kong. Copyright © 2011 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  16. Mechanical reliability analysis of tubes intended for hydrocarbons

    Nahal, Mourad; Khelif, Rabia [Badji Mokhtar University, Annaba (Algeria)


    Reliability analysis constitutes an essential phase in any study concerning reliability. Many manufacturers evaluate and improve the reliability of their products during the development cycle, from design to startup (design, manufacture, and exploitation), to develop their knowledge of the cost/reliability ratio and to control sources of failure. In this study, we report results of hardness, tensile, and hydrostatic tests carried out on steel tubes for transporting hydrocarbons, followed by statistical analysis. The results obtained allow us to conduct a reliability study based on the resistance-demand (stress-strength) model. Thus, the reliability index is calculated and the importance of the variables related to the tube is presented. Reliability-based assessment of residual stress effects is applied to underground pipelines under a roadway, with and without active corrosion. Residual stress has been found to greatly increase the probability of failure, especially in the early stages of pipe lifetime.
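
    The reliability index computed in such stress-strength studies can be illustrated under the common simplifying assumption of independent, normally distributed resistance R and load S (the numbers below are invented, not the paper's test data):

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index beta for the margin g = R - S with
    independent normal resistance R and load S."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """Pf = Phi(-beta), the standard normal CDF evaluated at -beta."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical tube: resistance N(500, 40) MPa against demand N(300, 30) MPa.
beta = reliability_index(500.0, 40.0, 300.0, 30.0)
pf = failure_probability(beta)
```

    Here beta = (500 - 300) / sqrt(40^2 + 30^2) = 4.0; residual stress would enter by shifting the demand mean upward, lowering beta.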

  17. Analysis of Reliability of CET Band 4



    CET Band 4 has been administered for more than a decade. It has become so large-scale, so popular and so influential that many testing experts and foreign language teachers are willing to do research on it. In this paper, I mainly analyse its reliability from the perspective of the writing test and the speaking test.

  18. Bypassing BDD Construction for Reliability Analysis

    Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine


    In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment....
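
    The minimal cut sets that such fault tree algorithms target can be illustrated with a naive enumeration over a toy tree (this brute-force sketch is not the note's BED/BDD technique, which exists precisely to avoid this kind of exponential expansion):

```python
from itertools import product

def cut_sets(node):
    """Enumerate cut sets of a fault tree given as ('AND'|'OR', *children),
    where leaves are basic-event names (strings)."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node[0], node[1:]
    child_sets = [cut_sets(c) for c in children]
    if gate == 'OR':
        # any child's cut set fails the gate
        return [cs for sets in child_sets for cs in sets]
    # AND: take the union of one cut set from each child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    """Drop any cut set that is a superset of another (non-minimal)."""
    sets = set(cut_sets(node))
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical top event: (A and B) or (A and C) or D
mcs = minimal_cut_sets(('OR', ('AND', 'A', 'B'), ('AND', 'A', 'C'), 'D'))
```

    The result {A,B}, {A,C}, {D} shows why single events like D dominate the failure probability; BDD/BED methods compute the same sets symbolically without enumerating combinations.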

  19. Reliability Analysis of an Offshore Structure

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Rackwitz, R.


    A jacket type offshore structure from the North Sea is considered. The time variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as function of the stochastic...

  20. Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis

    Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen


    The correlation coefficients of random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of the correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and the criterion of correlation among random variables is shown. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of the correlation coefficient ρ is very small (on the order of 0.00001), the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρs, the value most sensitive to the reliability, and ρR, the value giving the smallest reliability, is less than 0.001, ρs is suggested for modeling the dependency of random variables; this ensures robust system quality without loss of the safety requirement. (3) If |Eabs| > 0.001 and |Erel| > 0.001, ρR should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach could provide a practical routine for mechanical design and manufacturing to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.

  1. Reliability analysis of ceramic matrix composite laminates

    Thomas, David J.; Wetherhold, Robert C.


    At a macroscopic level, a composite lamina may be considered as a homogeneous orthotropic solid whose directional strengths are random variables. Incorporation of these random variable strengths into failure models, either interactive or non-interactive, allows for the evaluation of the lamina reliability under a given stress state. Using a non-interactive criterion for demonstration purposes, laminate reliabilities are calculated assuming previously established load sharing rules for the redistribution of load as laminae fail. The matrix cracking predicted by ACK theory is modeled to allow a loss of stiffness in the fiber direction. The subsequent failure in the fiber direction is controlled by a modified bundle theory. Results using this modified bundle model are compared with previous models which did not permit separate consideration of matrix cracking, as well as with results obtained from experimental data.

  2. DFTCalc: Reliability centered maintenance via fault tree analysis (tool paper)

    Guck, Dennis; Spel, Jip; Stoelinga, Mariëlle Ida Antoinette; Butler, Michael; Conchon, Sylvain; Zaïdi, Fatiha


    Reliability, availability, maintenance and safety (RAMS) analysis is essential in the evaluation of safety critical systems like nuclear power plants and the railway infrastructure. A widely used methodology within RAMS analysis are fault trees, representing failure propagations throughout a system.


    Gaguk Margono


    The purpose of this paper is to compare the unidimensional reliability and multidimensional reliability of an instrument measuring students' satisfaction as an internal customer. Multidimensional reliability measurement is rarely used in the field of research. Multidimensional reliability is estimated by using Confirmatory Factor Analysis (CFA) on the Structural Equation Model (SEM). Measurements and calculations are described in this article using the instrument for students' satisfaction as an internal customer. A survey method with simple random sampling was used in this study. The instrument was tried out on 173 students. It is concluded that the measuring instrument of students' satisfaction as an internal customer has higher accuracy when using a multidimensional reliability coefficient than when using a unidimensional reliability coefficient. Further research is expected to use other multidimensional reliability formulas, including when using SEM.

  5. Reliability analysis of PLC safety equipment

    Yu, J.; Kim, J. Y. [Chungnam Nat. Univ., Daejeon (Korea, Republic of)


    This work covers FMEA for nuclear safety grade PLCs, failure rate prediction for nuclear safety grade PLCs, sensitivity analysis of the component failure rates of nuclear safety grade PLCs, and unavailability analysis support for nuclear safety systems.

  6. Earth slope reliability analysis under seismic loadings using neural network

    PENG Huai-sheng; DENG Jian; GU De-sheng


    A new method was proposed to cope with the earth slope reliability problem under seismic loadings. The algorithm integrates the concepts of artificial neural networks, the first order second moment reliability method and the deterministic stability analysis method for earth slopes. The performance function and its derivatives in slope stability analysis under seismic loadings were approximated by a trained multi-layer feed-forward neural network with differentiable transfer functions. The statistical moments calculated from the performance function values and the corresponding gradients using the neural network were then used in the first order second moment method for the calculation of the reliability index in slope safety analysis. Two earth slope examples were presented to illustrate the applicability of the proposed approach. The new method is effective in slope reliability analysis, and it has potential application to other reliability problems of complicated engineering structures with a considerably large number of random variables.
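
    The first order second moment step of such an algorithm can be sketched as follows, with the neural network surrogate replaced by a plain Python performance function g(x); mu and sigma are the means and standard deviations of the independent random variables, and a linear toy margin stands in for the slope stability function:

```python
def fosm_beta(g, mu, sigma, h=1e-6):
    """First order second moment reliability index for a performance function
    g(x) (g >= 0 safe), with independent inputs of means mu and std devs sigma.
    Gradients are taken by forward finite differences at the mean point."""
    g0 = g(mu)
    var = 0.0
    for i in range(len(mu)):
        x = list(mu)
        x[i] += h
        grad_i = (g(x) - g0) / h  # dg/dx_i at the mean
        var += (grad_i * sigma[i]) ** 2
    return g0 / var ** 0.5

# Toy linear margin g = R - S with R ~ N(500, 40), S ~ N(300, 30),
# for which the FOSM index is exactly (500 - 300)/50 = 4.
beta = fosm_beta(lambda x: x[0] - x[1], [500.0, 300.0], [40.0, 30.0])
```

    In the paper's setting, g and its gradients come from the trained network instead of an explicit formula, which is what makes the moment evaluation cheap.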

  7. Design and Analysis for Reliability of Wireless Sensor Network

    Yongxian Song


    Reliability is an important performance indicator of wireless sensor networks; for application fields with high reliability demands, it is particularly important to ensure the reliability of the network. At present there are many reliability research findings on wireless sensor networks at home and abroad, but they mainly improve network reliability through the network topology, reliable protocols and application-layer fault correction, and comprehensive consideration of reliability from both the hardware and software aspects is much rarer. This paper adopts bionic hardware to implement bionic reconfiguration of wireless sensor network nodes, so that the nodes are able to change their structure and behavior autonomously and dynamically when part of the hardware fails, realizing bionic self-healing. Secondly, a Markov state diagram and probability analysis are adopted to solve a functional model of reliability, establishing the relationship between reliability and the characteristic parameters of sink nodes and analyzing the sink node reliability model, so as to determine reasonable model parameters and ensure the reliability of the sink nodes.
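
    A minimal sketch of the Markov treatment of a self-healing sink node, reduced to a two-state (working/failed) chain with an assumed failure rate lam and repair rate mu (the paper's actual model has more states and parameters):

```python
import math

def steady_state_availability(lam, mu):
    """Steady-state availability of a repairable node modeled as a two-state
    Markov chain: failure rate lam, repair (self-healing) rate mu."""
    return mu / (lam + mu)

def mission_reliability(lam, t):
    """Probability the node survives to time t when no repair is allowed."""
    return math.exp(-lam * t)
```

    The two quantities differ in exactly the way the record emphasizes: self-healing raises long-run availability (mu/(lam + mu)) even though the no-repair survival probability exp(-lam*t) is unchanged.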

  8. Reliability-Analysis of Offshore Structures using Directional Loads

    Sørensen, John Dalsgaard; Bloch, Allan; Sterndorff, M. J.


    Reliability analyses of offshore structures such as steel jacket platforms are usually performed using stochastic models for the wave loads based on the omnidirectional wave height. However, reliability analyses with respect to structural failure modes such as total collapse of a structure...... heights from the central part of the North Sea. It is described how the stochastic model for the directional wave heights can be used in a reliability analysis where total collapse of offshore steel jacket platforms is considered....

  9. Statistical analysis on reliability and serviceability of caterpillar tractor

    WANG Jinwu; LIU Jiafu; XU Zhongxiang


    To further understand the reliability and serviceability of tractors and to furnish scientific and technical theories for their promotion and application, the following experiments and statistical analysis on the reliability (reliability and MTBF) and serviceability (service and MTTR) of Dongfanghong-1002 and Dongfanghong-802 tractors were conducted. The results showed that the mean time between failures of these two tractors was 182.62 h and 160.2 h, respectively, and that the weakest assembly of both was the engine.

  10. Modified Bayesian Kriging for Noisy Response Problems for Reliability Analysis


    A modified Bayesian Kriging surrogate model is used for Monte Carlo simulation (MCS) prediction in the reliability analysis for the sampling-based reliability-based design optimization (RBDO) method, building on earlier work on sampling-based stochastic sensitivity analysis using score functions and on dynamic Kriging for RBDO problems with correlated input variables.

  11. Reliability analysis of large, complex systems using ASSIST

    Johnson, Sally C.


    The SURE reliability analysis program is discussed as well as the ASSIST model generation program. It is found that semi-Markov modeling using model reduction strategies with the ASSIST program can be used to accurately solve problems at least as complex as other reliability analysis tools can solve. Moreover, semi-Markov analysis provides the flexibility needed for modeling realistic fault-tolerant systems.

  12. Evaluating some Reliability Analysis Methodologies in Seismic Design

    A. E. Ghoulbzouri


    Problem statement: Accounting for the uncertainties present in the geometric and material data of reinforced concrete buildings is performed in this study within the context of performance-based seismic engineering design. Approach: Reliability of the expected performance state is assessed by using various methodologies based on finite element nonlinear static pushover analysis and a specialized reliability software package. The reliability approaches considered included full coupling with an external finite element code and response surface based methods in conjunction with either the first order reliability method or the importance sampling method. Various types of probability distribution functions modeling parameter uncertainties were introduced. Results: The probability of failure according to the reliability analysis method used and to the selected distribution of probabilities was obtained. Convergence analysis of the importance sampling method was performed. The required duration of analysis as a function of the reliability method used was evaluated. Conclusion/Recommendations: It was found that reliability results are sensitive to the reliability analysis method used and to the selected distribution of probabilities. Durations of analysis for coupling methods were found to be higher than those associated with response surface based methods; one should however include the time needed to derive the latter. For the reinforced concrete building considered in this study, it was found that significant variations exist between all the considered reliability methodologies. The fully coupled importance sampling method is recommended, but the first order reliability method applied on a response surface model can be used with good accuracy. Finally, the distributions of probabilities should be carefully identified, since giving the mean and the standard deviation was found to be insufficient.

  13. Reliability Distribution of Numerical Control Lathe Based on Correlation Analysis

    Xiaoyan Qi; Guixiang Shen; Yingzhi Zhang; Shuguang Sun; Bingkun Chen


    Combining reliability distribution with correlation analysis, a new method is proposed for making the reliability distribution that considers structure correlation and failure correlation among subsystems. Firstly, the subsystems are ranked by means of TOPSIS, which comprehends the usual considerations of reliability allocation; a copula connecting function is then introduced to set up a distribution model based on structure correlation, failure correlation and target correlation; the reliability target area of every subsystem is then acquired with Matlab. In this method, not only the traditional distribution considerations are concerned, but correlation influences are also involved, supplementing information and optimizing the distribution.

  14. Validity and reliability of the Iranian version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™ Generic Core Scales in children

    Amiri Parisa


    Background This study aimed to investigate the reliability and validity of the Iranian version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™ 4.0) Generic Core Scales in children. Methods A standard forward and backward translation procedure was used to translate the US English version of the PedsQL™ 4.0 Generic Core Scales for children into the Iranian language (Persian). The Iranian version of the PedsQL™ 4.0 Generic Core Scales was completed by 503 healthy and 22 chronically ill children aged 8-12 years and their parents. Reliability was evaluated using internal consistency. Known-groups discriminant comparisons were made, and exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted. Results The internal consistency, as measured by Cronbach's alpha coefficients, exceeded the minimum reliability standard of 0.70. All monotrait-multimethod correlations were higher than multitrait-multimethod correlations. The intraclass correlation coefficients (ICC) between the children's self-reports and parent proxy-reports showed moderate to high agreement. Exploratory factor analysis extracted six factors from the PedsQL™ 4.0 for both self and proxy reports, accounting for 47.9% and 54.8% of total variance, respectively. The results of the confirmatory factor analysis for 6-factor models for both self-report and proxy-report indicated acceptable fit for the proposed models. Regarding health status, as hypothesized from previous studies, healthy children reported significantly higher health-related quality of life than those with chronic illnesses. Conclusions The findings support the initial reliability and validity of the Iranian version of the PedsQL™ 4.0 as a generic instrument to measure the health-related quality of life of children in Iran.

  15. Reliability and safety analysis of redundant vehicle management computer system

    Shi Jian; Meng Yixuan; Wang Shaoping; Bian Mengmeng; Yan Dungong


    Redundant techniques are widely adopted in vehicle management computers (VMC) to ensure that the VMC has high reliability and safety. At the same time, this gives the VMC special characteristics, e.g., failure correlation, event simultaneity, and failure self-recovery. Accordingly, the reliability and safety analysis of a redundant VMC system (RVMCS) becomes more difficult. Aimed at the difficulties in RVMCS reliability modeling, this paper adopts generalized stochastic Petri nets to establish the reliability and safety models of an RVMCS. The paper then analyzes RVMCS operating states and potential threats to the flight control system. It is verified by simulation that the reliability of a VMC is not the product of hardware reliability and software reliability, and that the interactions between hardware and software faults can obviously reduce the real reliability of the VMC. Furthermore, failure-undetected states and false-alarming states inevitably exist in an RVMCS owing to the limited fault monitoring coverage and the false alarming probability of fault monitoring devices (FMD). An RVMCS operating in some failure-undetected states will produce fatal threats to the safety of the flight control system. An RVMCS operating in some false-alarming states will obviously reduce the utility of the RVMCS. The results abstracted in this paper can guide reliable VMC and efficient FMD designs. The methods adopted in this paper can also be used to analyze the reliability of other intelligent systems.

  16. Seismic reliability analysis of large electric power systems

    何军; 李杰


    Based on the De Morgan laws and Boolean simplification, a recursive decomposition method is introduced in this paper to identify the main exclusive safe paths and failed paths of a network. The reliability or the reliability bound of a network can be conveniently expressed as the summation of the joint probabilities of these paths. Under the multivariate normal distribution assumption, a conditioned reliability index method is developed to evaluate the joint probabilities of the various exclusive safe paths and failed paths and, finally, the seismic reliability or the reliability bound of an electric power system. Examples given in the paper show that the method is very simple and provides accurate results in seismic reliability analysis.
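
    For intuition, the connectivity probability that such path-based methods compute can be checked on a very small network by exhaustive state enumeration (this brute-force reference is not the paper's recursive decomposition or conditioned reliability index method, and it assumes independent component failures):

```python
from itertools import product

def connectivity_reliability(p, source, target):
    """Exact two-terminal reliability of a small network by enumerating all
    edge up/down states; p maps (node, node) edges to survival probabilities."""
    edges = list(p)
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        up = []
        for e, alive in zip(edges, states):
            prob *= p[e] if alive else 1.0 - p[e]
            if alive:
                up.append(e)
        reached, stack = {source}, [source]
        while stack:  # graph search over the surviving edges
            n = stack.pop()
            for a, b in up:
                for u, v in ((a, b), (b, a)):
                    if u == n and v not in reached:
                        reached.add(v)
                        stack.append(v)
        if target in reached:
            total += prob
    return total

# Hypothetical grid: two disjoint two-link paths s-a-t and s-b-t, each link 0.9.
rel = connectivity_reliability(
    {('s', 'a'): 0.9, ('a', 't'): 0.9, ('s', 'b'): 0.9, ('b', 't'): 0.9},
    's', 't')
```

    Enumeration costs 2^m states for m components, which is exactly why decomposition into exclusive safe/failed paths is needed for real power networks.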

  17. Simulation Approach to Mission Risk and Reliability Analysis Project

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  18. Reliability analysis of ship structure system with multi-defects


    This paper analyzes the influence of multi-defects including the initial distortions, welding residual stresses, cracks and local dents on the ultimate strength of the plate element, and works out expressions for the reliability calculation and sensitivity analysis of the plate element. Reliability analysis is made for the system with multi-defect plate elements. The failure mechanism, failure paths and the calculating approach to the global reliability index are also worked out. After plate elements with multi-defects fail, the formula for the reverse node forces which affect the residual structure is deduced, as are the sensitivity expressions of the system reliability index. This ensures calculating accuracy and rationality for reliability analysis, and makes it convenient to find the weak plate elements which affect the reliability of the structure system. Finally, to validate the proposed approach, we take the numerical example of a ship cabin to compare and contrast the reliability and the sensitivity analysis of the structure system with multi-defects with those of the structure system with no defects. The approach has implications for structure design, rational maintenance and renewal strategy.

  19. The Stress and Reliability Analysis of HTR’s Graphite Component

    Xiang Fang


    The high temperature gas cooled reactor (HTR) is developing rapidly toward modular, compact, and integral designs. As the main structural material, graphite plays a very important role in HTR engineering, and the reliability of graphite components is closely related to the integrity of the reactor core. The graphite components are subjected to high temperature and fast neutron irradiation simultaneously during normal operation of the reactor. With the stress accumulation induced by high temperature and irradiation, the failure risk of graphite components increases constantly. Therefore it is necessary to study and simulate the mechanical behavior of graphite components under in-core working conditions and to forecast the internal stress accumulation history and the variation of reliability. The work of this paper focuses on the mechanical analysis of the graphite bricks of a pebble-bed HTR. The analysis process comprises two procedures, stress analysis and reliability analysis. Three different creep models and two different reliability models are reviewed and taken into account in the simulation. The stress and failure probability calculation results are obtained and discussed. The results gained with the various models are highly consistent, and the discrepancies are acceptable.
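
    A typical ingredient of reliability models for brittle materials such as graphite is the two-parameter Weibull failure probability; a minimal sketch with an assumed characteristic strength sigma0 and Weibull modulus m (illustrative values, not the paper's data or its specific reliability models):

```python
import math

def weibull_failure_probability(stress, sigma0, m):
    """Two-parameter Weibull failure probability for a brittle component:
    Pf = 1 - exp(-(stress/sigma0)^m)."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

# Hypothetical graphite brick: characteristic strength 10 MPa, modulus 5.
pf_at_characteristic = weibull_failure_probability(10.0, 10.0, 5.0)
```

    At the characteristic strength the failure probability is 1 - e^{-1} ≈ 0.632 by construction; in a full analysis the stress argument would come from the creep/irradiation stress history at each point of the brick.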

  20. Requalification of offshore structures. Reliability analysis of platform

    Bloch, A.; Dalsgaard Soerensen, J. [Aalborg Univ. (Denmark)


    A preliminary reliability analysis has been performed for an example platform. In order to model the structural response such that it is possible to calculate reliability indices, approximate quadratic response surfaces have been determined for the cross-sectional forces. Based on a deterministic, code-based analysis, the elements and joints which can be expected to be the most critical are selected and response surfaces are established for the cross-sectional forces in those. A stochastic model is established for the uncertain variables. The reliability analysis shows that with this stochastic model the smallest reliability indices for elements are about 3.9. The reliability index for collapse (pushover) is estimated at 6.7, and the reliability index for fatigue failure using a crude model is estimated at 3.2 for the expected most critical detail, corresponding to the accumulated damage during the design lifetime of the platform. These reliability indices are considered to be reasonable compared with values recommended by e.g. ISO. The most important stochastic variables are found to be the wave height and the drag coefficient (including the model uncertainty related to estimation of wave forces on the platform). (au)

  1. Effects of core-to-dentin thickness ratio on the biaxial flexural strength, reliability, and fracture mode of bilayered materials of zirconia core (Y-TZP) and veneer indirect composite resins.

    Su, Naichuan; Liao, Yunmao; Zhang, Hai; Yue, Li; Lu, Xiaowen; Shen, Jiefei; Wang, Hang


    Indirect composite resins (ICR) are promising alternatives as veneering materials for zirconia frameworks. The effects of the core-to-dentin thickness ratio (C/Dtr) on the mechanical properties of bilayered veneer ICR/yttria-tetragonal zirconia polycrystalline (Y-TZP) core disks have not been previously studied. The purpose of this in vitro study was to assess the effects of C/Dtr on the biaxial flexural strength, reliability, and fracture mode of bilayered veneer ICR/Y-TZP core disks. A total of 180 bilayered 0.6-mm-thick composite resin disks in core material and C/Dtr of 2:1, 1:1, and 1:2 were tested, with either the core material placed up or placed down, for piston-on-3-ball biaxial flexural strength. The mean biaxial flexural strength, Weibull modulus, and fracture mode were measured to evaluate the variation trend of the biaxial flexural strength, reliability, and fracture mode of the bilayered disks with various C/Dtr. One-way analysis of variance (ANOVA) and chi-square tests were used to evaluate the variation tendency of fracture mode with the C/Dtr or the material placed down during testing (α=.05). Light microscopy was used to identify the fracture mode. The mean biaxial flexural strength and reliability improved with the increase in C/Dtr when specimens were tested with the core material either up or down, and depended on the material that was placed down during testing. The rates of delamination, Hertzian cone cracks, subcritical radial cracks, and the number of fracture fragments partially depended on the C/Dtr and the material that was placed down during testing. The biaxial flexural strength, reliability, and fracture mode in bilayered structures of Y-TZP core and veneer ICR depend on both the C/Dtr and the material that is placed down during testing. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. Maritime shipping as a high reliability industry: A qualitative analysis

    Mannarelli, T.; Roberts, K.; Bea, R.


    The maritime oil shipping industry has great public demands for safe and reliable organizational performance. Researchers have identified a set of organizations and industries that operate at extremely high levels of reliability, and have labelled them High Reliability Organizations (HRO). Following the Exxon Valdez oil spill disaster of 1989, public demands for HRO-level operations were placed on the oil industry. It will be demonstrated that, despite enormous improvements in safety and reliability, maritime shipping is not operating as an HRO industry. An analysis of the organizational, environmental, and cultural history of the oil industry will help to provide justification and explanation. The oil industry will be contrasted with other HRO industries and the differences will inform the shortfalls maritime shipping experiences with regard to maximizing reliability. Finally, possible solutions for the achievement of HRO status will be offered.

  3. Reliability Analysis of OMEGA Network and Its Variants

    Suman Lata


    The performance of a computer system depends directly on the time required to perform a basic operation and the number of these basic operations that can be performed concurrently. High performance computing systems can be designed using parallel processing. Parallel processing is achieved by using more than one processor or computer working together, communicating with each other to solve a given problem. MINs provide a better way for the communication between different processors or memory modules, with less complexity, fast communication, good fault tolerance, high reliability and low cost. The reliability of a system is the probability that it will successfully perform its intended operations for a given time under stated operating conditions. From the reliability analysis it has been observed that the addition of one stage to the Omega network provides higher reliability, in terms of terminal reliability, than the addition of two stages to the corresponding network.
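
    The terminal reliability comparison can be caricatured with an idealized model: the baseline Omega network offers a unique path of switches in series, while an extra-stage variant is modeled here, optimistically, as two switch-disjoint redundant paths. The switch reliability r and the disjoint-path assumption are simplifications of this sketch; real extra-stage networks share switches, so this overstates the gain:

```python
def omega_terminal_reliability(r, stages):
    """Terminal reliability of a baseline Omega MIN: a unique path through
    `stages` switches, each surviving with probability r (series system)."""
    return r ** stages

def extra_stage_terminal_reliability(r, stages):
    """Idealized extra-stage MIN: two redundant switch-disjoint paths of
    stages + 1 switches each (parallel combination of two series paths)."""
    path = r ** (stages + 1)
    return 1.0 - (1.0 - path) ** 2
```

    With r = 0.99 and 3 stages, the series path gives about 0.970 while the idealized extra-stage network exceeds 0.998, consistent with the record's observation that one added stage already raises terminal reliability.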

  4. Discrete event simulation versus conventional system reliability analysis approaches

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  5. Seismic reliability analysis of urban water distribution network

    Li Jie; Wei Shulin; Liu Wei


    An approach to analyze the seismic reliability of water distribution networks by combining a hydraulic analysis with a first-order reliability method (FORM) is proposed in this paper. The hydraulic analysis method for normal conditions is modified to accommodate the special conditions necessary to perform a seismic hydraulic analysis. In order to calculate the leakage area and leaking flow of the pipelines in the hydraulic analysis method, a new leakage model established from the seismic response analysis of buried pipelines is presented. To validate the proposed approach, a network with 17 nodes and 24 pipelines is investigated in detail. The approach is also applied to an actual project consisting of 463 nodes and 767 pipelines. The results show that the proposed approach achieves satisfactory results in analyzing the seismic reliability of large-scale water distribution networks.

  6. A Passive System Reliability Analysis for a Station Blackout

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David; Sofu, Tanju; Grelle, Austin


    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  7. Reliability Analysis of Dynamic Stability in Waves

    Søborg, Anders Veldt


    exhibit sufficient characteristics with respect to slope at zero heel (GM value), maximum leverarm, positive range of stability and area below the leverarm curve. The rule-based requirements to calm water leverarm curves are entirely based on experience obtained from vessels in operation and recorded......-4 per ship year such brute force Monte-Carlo simulations are not always feasible due to the required computational resources. Previous studies of dynamic stability of ships in waves typically focused on the capsizing event. In this study the objective is to establish a procedure that can identify...... the distribution of the exceedance probability may be established by an estimation of the out-crossing rate of the "safe set" defined by the utility function. This out-crossing rate will be established using the so-called Madsen's Formula. A bi-product of this analysis is a set of short wave time series...

  8. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    Berzonskis, Arvydas; Sørensen, John Dalsgaard


    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....... of operation and maintenance. The manufacturing of casted drivetrain components, like the main shaft of the wind turbine, commonly result in many smaller defects through the volume of the component with sizes that depend on the manufacturing method. This paper considers the effect of the initial defect present...

  9. Analysis on Operation Reliability of Generating Units in 2009



    This paper presents the data on operation reliability indices and relevant analyses of China's conventional power generating units in 2009. The units brought into the statistical analysis include 100-MW or above thermal generating units, 40-MW or above hydro generating units, and all nuclear generating units. The reliability indices covered include utilization hours, times and hours of scheduled outages, times and hours of unscheduled outages, equivalent forced outage rate and equivalent availability factor.
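
    The outage-based indices named above reduce to simple ratios of hours. The definitions below are illustrative simplifications (standards such as IEEE Std 762 refine them with derated-hour terms), and the hour values are hypothetical.

```python
def availability_factor(period_h, planned_outage_h, unplanned_outage_h):
    """Availability factor: fraction of the period the unit was available
    (simplified; the 'equivalent' version also weights derated hours)."""
    return (period_h - planned_outage_h - unplanned_outage_h) / period_h

def forced_outage_rate(service_h, forced_outage_h):
    """Forced outage rate: forced outage hours relative to the time the
    unit was either in service or forced out (simplified definition)."""
    return forced_outage_h / (service_h + forced_outage_h)

# One year of hypothetical operation, in hours
eaf = availability_factor(8760, 500, 260)
efor = forced_outage_rate(8000, 260)
```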

  10. Core Backbone Convergence Mechanisms and Microloops Analysis

    Abdelali Ala


    Full Text Available In this article we study approaches that can be used to minimise the convergence time; we also focus on the microloop phenomenon, its analysis and the means to mitigate it. The convergence time reflects the time required by a network to react to the failure of a link or of a router itself. When all nodes (routers) have updated their respective routing and forwarding databases, we can say the network has converged. This study will help in building real-time and resilient network infrastructure; the goal is to make any event in the core network as transparent as possible to sensitive, real-time flows. This study is also a deepening of earlier works presented in [10] and [11].

  11. Determination of power distribution in the VVER-440 core on the basis of data from in-core monitors by means of a metric analysis

    Kryanev, A. V.; Udumyan, D. K.; Kurchenkov, A. Yu.; Gagarinskiy, A. A.


    Problems associated with determining the power distribution in the VVER-440 core on the basis of a neutron-physics calculation and data from in-core monitors are considered. A new mathematical scheme is proposed for this on the basis of a metric analysis. Compared with existing mathematical schemes, the proposed scheme improves the accuracy and reliability of the resulting power distribution.

  12. Reliability analysis and initial requirements for FC systems and stacks

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.

  13. Coverage Modeling and Reliability Analysis Using Multi-state Function


    Fault tree analysis is an effective method for predicting the reliability of a system. It gives a pictorial representation and logical framework for analyzing reliability. It has also long been used as an effective method for the quantitative and qualitative analysis of the failure modes of critical systems. In this paper, we propose a new general coverage model (GCM) based on hardware-independent faults. Using this model, an effective software tool can be constructed to detect, locate and recover faults in the faulty system. This model can be applied to identify the key components that can cause the failure of the system using failure mode effect analysis (FMEA).
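
    The quantitative side of fault tree analysis reduces, for independent basic events, to the probability of the union of the minimal cut sets. A minimal sketch (event names and probabilities are hypothetical, and the inclusion-exclusion expansion is only practical for small numbers of cut sets):

```python
from itertools import combinations

def cutset_union_prob(cutsets, p):
    """Exact top-event probability for independent basic events via
    inclusion-exclusion over the minimal cut sets."""
    def joint_prob(sets):
        # Probability that every basic event in the union of these cut sets occurs
        events = set().union(*sets)
        out = 1.0
        for e in events:
            out *= p[e]
        return out
    total = 0.0
    for k in range(1, len(cutsets) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(cutsets, k):
            total += sign * joint_prob(combo)
    return total

# Top event = (A AND B) OR C, with independent basic-event probabilities
p = {"A": 0.1, "B": 0.2, "C": 0.05}
top = cutset_union_prob([{"A", "B"}, {"C"}], p)
```

    For this tree the exact answer is 0.02 + 0.05 − 0.001 = 0.069.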

  14. Reliability analysis of flood defence systems in the Netherlands

    Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for reliability analysis of dike systems has been under development in the Netherlands. This paper describes the global data requirements for application and the set-up of the models in the Netherlands. The analysis generates an estimate of the probability of sys

  15. Recent advances in computational structural reliability analysis methods

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.


    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
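
    For a system that can fail through multiple modes, the classic first-order bounds referenced by such analyses can be sketched directly: a series system's failure probability lies between the largest single-mode probability (fully dependent modes) and the sum of the mode probabilities (disjoint modes), with the independent-modes value in between. The mode probabilities below are hypothetical.

```python
def series_system_bounds(pf_modes):
    """First-order bounds on the failure probability of a series system
    (the system fails if ANY mode fails), plus the independent-modes value."""
    lower = max(pf_modes)              # attained for fully dependent modes
    upper = min(1.0, sum(pf_modes))    # attained for mutually exclusive modes
    survive = 1.0
    for pf in pf_modes:
        survive *= (1.0 - pf)
    independent = 1.0 - survive        # value if all modes are independent
    return lower, independent, upper

# Hypothetical modes: excessive deflection, resonant vibration, creep rupture
low, indep, up = series_system_bounds([0.01, 0.02, 0.005])
```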

  16. On reliability analysis of multi-categorical forecasts

    J. Bröcker


    Full Text Available Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is but a necessary condition for reliability. Secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability is analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts of 2 m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
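
    The rank histogram, and a goodness-of-fit check against uniformity, can be sketched in a few lines. This toy example uses a synthetic, reliable forecast (observations drawn from the same Gaussian as the ensemble members), not the paper's data, and a plain Pearson statistic rather than the Ignorance-score reliability term discussed above.

```python
import random

def rank_histogram(observations, ensembles):
    """Talagrand/rank histogram: rank of each observation within its
    ensemble; a reliable forecast yields a near-uniform histogram.
    Assumes continuous data (no ties)."""
    n_bins = len(ensembles[0]) + 1
    counts = [0] * n_bins
    for obs, ens in zip(observations, ensembles):
        rank = sum(1 for member in ens if member < obs)
        counts[rank] += 1
    return counts

def chi2_uniform(counts):
    """Pearson goodness-of-fit statistic against the uniform histogram."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
ens = [[random.gauss(0, 1) for _ in range(9)] for _ in range(5000)]
obs = [random.gauss(0, 1) for _ in range(5000)]
counts = rank_histogram(obs, ens)
stat = chi2_uniform(counts)  # compare against a chi-squared with 9 dof
```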




    Full Text Available The introduction of pervasive and mobile devices has led to immense growth of real-time distributed processing. In such a context, the reliability of the computing environment is very important. Reliability is the probability that the devices, links, processes, programs and files work efficiently for the specified period of time and under the specified conditions. Distributed systems are available as conventional ring networks, clusters and agent-based systems; the reliability of such systems is the focus here. These networks are heterogeneous and scalable in nature. Several factors have to be considered for reliability estimation. These include application-related factors like algorithms, data-set sizes, memory usage patterns, input-output, communication patterns, task granularity and load balancing. They also include hardware-related factors like processor architecture, memory hierarchy, input-output configuration and network. The software-related factors concerning reliability are operating systems, compilers, communication protocols, libraries and preprocessor performance. In estimating the reliability of a system, performance estimation is an important aspect. Reliability analysis is approached using probability.

  18. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Simon, Tyler; McGalliard, James


    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  19. Self-Healing Many-Core Architecture: Analysis and Evaluation

    Arezoo Kamran


    Full Text Available More pronounced aging effects, more frequent early-life failures, and incomplete testing and verification processes due to time-to-market pressure in new fabrication technologies impose reliability challenges on forthcoming systems. A promising solution to these reliability challenges is self-test and self-reconfiguration with no or limited external control. In this work a scalable self-test mechanism for periodic online testing of many-core processors has been proposed. This test mechanism facilitates autonomous detection and omission of faulty cores and makes graceful degradation of the many-core architecture possible. Several test components are incorporated in the many-core architecture that distribute test stimuli, suspend normal operation of individual processing cores, apply tests, and detect faulty cores. Testing is performed concurrently with the system's normal operation, without any noticeable downtime at the application level. Experimental results show that the proposed test architecture is extensively scalable in terms of hardware and performance overhead, which makes it applicable to many-core systems with more than a thousand processing cores.

  20. The development of a reliable amateur boxing performance analysis template.

    Thomson, Edward; Lamb, Kevin; Nicholas, Ceri


    The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and assess its reliability using analysts of varying experience of the sport and performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact or the reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
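
    The proportion-of-agreement statistic with a ±1 reference limit used above can be sketched in a few lines; the paired notation counts below are invented, not data from the study.

```python
def agreement_within(obs1, obs2, tol=1):
    """Percentage of paired counts agreeing within a reference limit of
    error (+/- tol), as used in notational-analysis reliability studies.
    tol=0 gives exact agreement."""
    hits = sum(1 for a, b in zip(obs1, obs2) if abs(a - b) <= tol)
    return 100.0 * hits / len(obs1)

# Hypothetical punch counts for the same bout notated on two occasions
first = [12, 8, 15, 4, 9, 11]
second = [12, 9, 13, 4, 8, 11]
exact = agreement_within(first, second, tol=0)
ref = agreement_within(first, second, tol=1)
```

    For these invented counts, exact agreement is 50% but agreement within the ±1 reference limit rises to about 83%, mirroring how the two criteria can diverge.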

  1. Reliability analysis of cluster-based ad-hoc networks

    Cook, Jason L. [Quality Engineering and System Assurance, Armament Research Development Engineering Center, Picatinny Arsenal, NJ (United States); Ramirez-Marquez, Jose Emmanuel [School of Systems and Enterprises, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)]


    The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks because it is a self-forming and dynamic network. The MAWN is free of infrastructure and, as such, only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes. To do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, the challenge for reliability analyses is also brought about by this unique feature. The variability and volatility of the MAWN configuration make typical reliability methods (e.g. reliability block diagrams) inappropriate because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. Newly published methods adapt to this feature by treating the configuration probabilistically or by inclusion of embedded mobility models. This paper joins both methods together and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN. The cluster-based MAWN is deployed in applications with constraints on networking resources such as bandwidth and energy. This paper presents the problem's formulation, a discussion of applicable reliability metrics for the MAWN, and an illustration of a Monte Carlo simulation method through the analysis of several example networks.
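
    The Monte Carlo idea, sampling link failures and checking connectivity between a source and a destination, can be sketched on a fixed five-link bridge network. This static illustration ignores the mobility and clustering aspects of the paper; topology and link reliability are invented.

```python
import random

def connected(n_nodes, up_edges, src, dst):
    """Depth-first search over the surviving links."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return dst in seen

def mc_two_terminal(n_nodes, edges, p_up, src, dst, trials, seed=1):
    """Monte Carlo two-terminal reliability: each link is independently
    up with probability p_up."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() < p_up]
        ok += connected(n_nodes, up, src, dst)
    return ok / trials

# Classic bridge network: 0-1, 0-2, 1-2 (bridge), 1-3, 2-3
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
rel = mc_two_terminal(4, edges, 0.9, 0, 3, 20000)
```

    With every link up with probability 0.9, the exact two-terminal reliability of this bridge works out to about 0.978, which the estimate should approach.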

  2. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Jin Zhu


    Full Text Available This paper investigates the reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markovian chain. We give the quantitative relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. With the conditions satisfied, the quantity of data transported over a wireless network node will not exceed the node capacity, such that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find application in the fields of network design and topology control.
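
    The Markovian topology-switching model can be illustrated by computing the stationary distribution of a two-state topology chain, giving the long-run fraction of time the network spends in each connectivity state. The transition matrix is hypothetical.

```python
def stationary_distribution(P, iters=200):
    """Power-iterate a row-stochastic transition matrix toward its
    stationary distribution (assumes an ergodic chain)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# State 0: fully connected topology, state 1: degraded topology
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)
frac_connected = pi[0]  # long-run fraction of time fully connected
```

    For this chain the stationary split is 5/6 connected versus 1/6 degraded, which could then weight per-topology reliability results.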

  3. Reliability of the Emergency Severity Index: Meta-analysis

    Amir Mirhaghi


    Full Text Available Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa), and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method of moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787‒0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to see full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
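
    The pooling step, Z-transforming each reliability coefficient, averaging, and back-transforming, can be sketched as follows. For brevity this uses fixed-effect weighting (n − 3 per study) rather than the random-effects model of the paper, and the coefficients and sample sizes are invented.

```python
import math

def pool_fixed_effect(coeffs, ns):
    """Fixed-effect pooling of reliability coefficients via Fisher's
    z-transformation, each study weighted by n - 3."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in coeffs]  # atanh(r)
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # back-transform to the coefficient scale

# Three hypothetical studies
pooled = pool_fixed_effect([0.75, 0.80, 0.85], [50, 100, 150])
```

    Averaging on the z scale rather than the raw scale keeps the pooled value inside (−1, 1) and corrects the skew of the coefficient's sampling distribution.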

  4. Statistical models and methods for reliability and survival analysis

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo


    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  5. Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis

    Dezfuli, Homayoon; Kelly, Dana; Smith, Curtis; Vedros, Kurt; Galyean, William


    This document, Bayesian Inference for NASA Probabilistic Risk and Reliability Analysis, is intended to provide guidelines for the collection and evaluation of risk and reliability-related data. It is aimed at scientists and engineers familiar with risk and reliability methods and provides a hands-on approach to the investigation and application of a variety of risk and reliability data assessment methods, tools, and techniques. This document provides both: A broad perspective on data analysis collection and evaluation issues. A narrow focus on the methods to implement a comprehensive information repository. The topics addressed herein cover the fundamentals of how data and information are to be used in risk and reliability analysis models and their potential role in decision making. Understanding these topics is essential to attaining a risk informed decision making environment that is being sought by NASA requirements and procedures such as 8000.4 (Agency Risk Management Procedural Requirements), NPR 8705.05 (Probabilistic Risk Assessment Procedures for NASA Programs and Projects), and the System Safety requirements of NPR 8715.3 (NASA General Safety Program Requirements).
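
    A minimal example of the kind of data evaluation such guidance covers is the conjugate Beta-Binomial update for a failure-on-demand probability; the prior and the counts below are illustrative, not taken from the document.

```python
def beta_binomial_update(alpha, beta, failures, demands):
    """Conjugate Bayesian update for a failure-on-demand probability:
    Beta(alpha, beta) prior combined with binomial evidence."""
    return alpha + failures, beta + demands - failures

def beta_mean(alpha, beta):
    """Posterior mean of a Beta distribution."""
    return alpha / (alpha + beta)

# Jeffreys prior Beta(0.5, 0.5), then 2 observed failures in 100 demands
a, b = beta_binomial_update(0.5, 0.5, 2, 100)
post_mean = beta_mean(a, b)
```

    The posterior mean (2.5/101 ≈ 0.025) sits between the prior mean of 0.5 and the raw failure rate of 0.02, pulled strongly toward the data as evidence accumulates.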

  6. Notes on numerical reliability of several statistical analysis programs

    Landwehr, J.M.; Tasker, Gary D.


    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  7. Distribution System Reliability Analysis for Smart Grid Applications

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repair and loss. To address its reliability concerns, the power utilities and interested parties have spent extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to be focused on improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we will examine the effect of the smart grid applications in improving the reliability of the power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of the automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures will be the changes in the reliability system indices including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of the Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, which is intelligent power distribution software that is developed by General Reliability Co.
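
    The distribution reliability indices named above (SAIFI, SAIDI) are simple ratios over interruption events; a sketch with hypothetical outage data follows. (EUE additionally requires energy-not-served estimates, omitted here.)

```python
def saifi(interruptions, customers_served):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(n for n, _ in interruptions) / customers_served

def saidi(interruptions, customers_served):
    """System Average Interruption Duration Index:
    total customer-minutes interrupted / total customers served."""
    return sum(n * minutes for n, minutes in interruptions) / customers_served

# Hypothetical events: (customers affected, outage duration in minutes)
events = [(1200, 90), (300, 45), (2500, 30)]
f = saifi(events, 10000)   # interruptions per customer per period
d = saidi(events, 10000)   # interrupted minutes per customer per period
```

    Automatic switching and DGs of the kind studied above improve these indices by shrinking either the customers affected per event or the event durations.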

  8. Reliability analysis of retaining walls with multiple failure modes

    张道兵; 孙志彬; 朱川曲


    In order to reduce the errors in the reliability analysis of retaining wall structures that arise from the choice of limit state function, parameter estimation and algorithm, two new reliability and stability models for anti-slipping and anti-overturning, based on the upper-bound theory of limit analysis, were first established, and the two failure modes were treated as a series system with multiple correlated failure modes. Then, the statistical characteristics of the parameters of the retaining wall structure were inferred using the maximum entropy principle. Finally, the structural reliabilities for the single and the multiple failure modes were calculated by the Monte Carlo method in MATLAB, and the results were compared and analyzed for sensitivity. The method, with high precision, is not only easy to program and quick in calculation, but is also free of the restrictions of nonlinear functions and non-normal random variables. Combining limit analysis theory, the maximum entropy principle and the Monte Carlo method in this way makes the reliability analysis of retaining wall structures more scientific, accurate and reliable than the traditional method.
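
    The series-system idea, two correlated failure modes evaluated together by Monte Carlo, can be sketched with two invented limit states that share the same random load, which is what correlates the modes. The limit state expressions and distributions are hypothetical stand-ins, not the paper's upper-bound formulations.

```python
import math
import random

def mc_system_pf(trials=50000, seed=7):
    """Monte Carlo failure probability of a two-mode series system
    (sliding and overturning). Both margins depend on the same sampled
    load, so the modes are correlated. Illustrative limit states only."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        phi = rng.gauss(30.0, 3.0)     # soil friction angle, degrees
        load = rng.gauss(100.0, 15.0)  # resultant load, kN
        g_slide = 250.0 * math.tan(math.radians(phi)) - load  # sliding margin
        g_turn = 180.0 - 1.4 * load                           # overturning margin
        if g_slide < 0 or g_turn < 0:  # series system: any mode failing fails it
            fails += 1
    return fails / trials

pf = mc_system_pf()
```

    Because the modes share the load variable, the system probability is smaller than the sum of the two mode probabilities, which is exactly what treating the modes as independent would miss.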

  9. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    William, R. K.; Stillwell, A. S.


    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
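
    A fragility curve of the kind adapted here is commonly taken as a lognormal CDF in the demand measure: the probability of failure given a demand level, defined by a median capacity and a logarithmic dispersion. The green-roof numbers below are hypothetical.

```python
import math

def lognormal_fragility(demand, median, beta):
    """P(failure | demand): lognormal CDF with median capacity `median`
    and logarithmic dispersion `beta`, the standard seismic-fragility form."""
    return 0.5 * math.erfc(-math.log(demand / median) / (beta * math.sqrt(2.0)))

# Hypothetical: probability a green roof fails to attenuate a 50 mm storm,
# assuming a median attenuation capacity of 60 mm and dispersion 0.4
p_fail = lognormal_fragility(50.0, 60.0, 0.4)
```

    By construction the curve passes through 0.5 at the median demand and rises monotonically, so stacking curves for several damage or performance states gives the full fragility family.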

  10. Semigroup Method for a Mathematical Model in Reliability Analysis

    Geni Gupur; LI Xue-zhi


    The system, which consists of a reliable machine, an unreliable machine and a storage buffer with infinitely many workpieces, has been studied. The existence of a unique positive time-dependent solution of the model corresponding to the system has been obtained by using the C0-semigroup theory of linear operators in functional analysis.

  11. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard


    . A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness...

  12. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard;


    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A comp...

  13. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.


    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  14. Test-retest reliability of trunk accelerometric gait analysis

    Henriksen, Marius; Lund, Hans; Moe-Nilssen, R


    The purpose of this study was to determine the test-retest reliability of a trunk accelerometric gait analysis in healthy subjects. Accelerations were measured during walking using a triaxial accelerometer mounted on the lumbar spine of the subjects. Six men and 14 women (mean age 35.2; range 18...

  15. Human Reliability Analysis for Digital Human-Machine Interfaces

    Ronald L. Boring


    This paper addresses the fact that existing human reliability analysis (HRA) methods do not provide guidance on digital human-machine interfaces (HMIs). Digital HMIs are becoming ubiquitous in nuclear power operations, whether through control room modernization or new-build control rooms. Legacy analog technologies like instrumentation and control (I&C) systems are costly to support, and vendors no longer develop or support analog technology, which is considered technologically obsolete. Yet, despite the inevitability of digital HMI, no current HRA method provides guidance on how to treat human reliability considerations for digital technologies.

  16. Classification using least squares support vector machine for reliability analysis

    Zhi-wei GUO; Guang-chen BAI


    In order to improve the efficiency of the support vector machine (SVM) classifier when dealing with a large number of samples, the least squares support vector machine (LSSVM) classification method is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem into a group of linear equations. The numerical results indicate that the reliability method based on LSSVM classification has higher accuracy and requires less computational cost than the SVM method.
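    As a minimal sketch of that reformulation (toy data and hypothetical parameter values, not the paper's implementation), an LSSVM classifier can be trained by solving a single linear KKT system instead of a quadratic program:

```python
import numpy as np

def rbf_kernel(A, B, k=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-k * d2)

def lssvm_fit(X, y, gamma=10.0):
    """Solve the LSSVM KKT system [[0, y^T], [y, Omega + I/gamma]] [b; a] = [0; 1]."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[1:], sol[0]          # alpha (support values), bias b

def lssvm_predict(X, y, alpha, b, Xnew):
    return np.sign(rbf_kernel(Xnew, X) @ (alpha * y) + b)

# Toy training set: label is the sign of the first coordinate minus 0.5.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_fit(X, y)
pred = lssvm_predict(X, y, alpha, b, X)
```

    In a reliability setting, the trained classifier would separate safe from failed samples of the limit state, so that only a fraction of the expensive model evaluations is needed.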

  17. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Swain, A.D.


    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  18. Strength Reliability Analysis of Turbine Blade Using Surrogate Models

    Wei Duan


    There are many stochastic parameters that affect the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining the Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametric model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (maximum stress and deflection) and the random input variables; these act as a surrogate for the finite element solver and drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation. Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
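    The surrogate-plus-MCS workflow can be sketched as follows; the response function, distributions and sample sizes are invented stand-ins for the blade FEM model, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive FEM response (hypothetical; in the study this
# would be blade maximum stress or deflection). g < 0 denotes failure.
def fem_response(x1, x2):
    return x1 - x2

# 1) Design of experiments + polynomial response surface (quadratic basis).
def basis(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

xd1 = rng.uniform(0.0, 10.0, 30)
xd2 = rng.uniform(0.0, 10.0, 30)
coef, *_ = np.linalg.lstsq(basis(xd1, xd2), fem_response(xd1, xd2), rcond=None)

# 2) Monte Carlo on the cheap surrogate with random input variables.
n = 200_000
s1 = rng.normal(5.0, 1.0, n)      # strength-like variable (hypothetical)
s2 = rng.normal(3.0, 1.0, n)      # load-like variable (hypothetical)
g_hat = basis(s1, s2) @ coef
pf = float((g_hat < 0.0).mean())  # failure probability estimate
```

    For this linear stand-in the exact answer is Phi(-2/sqrt(2)), about 0.079, so the surrogate-based estimate can be checked directly; a real PRS would only approximate the FEM response.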

  19. Generating function approach to reliability analysis of structural systems


    The generating function approach is an important tool for performance assessment of multi-state systems. Aiming at strength reliability analysis of structural systems, the generating function approach is introduced and developed. Static reliability models of statically determinate and indeterminate systems, as well as fatigue reliability models, are built by constructing special generating functions, which describe the probability distributions of strength (resistance), stress (load) and fatigue life, and by defining composite operators of generating functions and their performance structure functions. When composition operators are executed, computational costs can be reduced by a large margin by collecting like terms. The results of theoretical analysis and numerical simulation show that the generating function approach can be widely used for probability modeling of large complex systems with hierarchical structures, owing to its unified form, compact expression, ease of computer implementation and high generality. Because the new method considers twin loads giving rise to component failure dependency, it can provide a theoretical reference and act as a powerful tool for static and dynamic reliability analysis of civil engineering structures and mechanical equipment systems with multi-mode damage coupling.
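    A minimal illustration of the composite operator (hypothetical discrete strength and stress distributions): the generating functions are stored as value-probability maps and composed through a performance structure function:

```python
# Generating functions represented as {value: probability} dictionaries
# (hypothetical discretized strength and stress distributions).
strength = {2: 0.2, 3: 0.8}            # resistance PMF
stress   = {1: 0.5, 2: 0.3, 3: 0.2}    # load PMF

def compose(gf_a, gf_b, perf):
    """Composite operator: accumulate probability over value pairs for which
    the performance structure function holds, collecting like terms rather
    than tracking every combination separately."""
    return sum(pa * pb
               for a, pa in gf_a.items()
               for b, pb in gf_b.items()
               if perf(a, b))

# Static strength reliability: P(strength > stress).
reliability = compose(strength, stress, lambda s, l: s > l)
```

    Here the composition enumerates 2 x 3 = 6 terms; for hierarchical systems the same operator is applied level by level, which is where collecting like terms pays off.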

  20. Identifying Sources of Difference in Reliability in Content Analysis

    Elizabeth Murphy


    This paper reports on a case study that identifies and illustrates sources of difference in agreement, in relation to reliability, in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks and students. Reliability values were calculated at the levels of categories, processes and indicators. At the most detailed level of coding, on the basis of the indicator, findings revealed that the overall level of reliability between coders was .591 when measured with Cohen's kappa. The difference between tasks at the same level ranged from .349 to .664, and the difference between participants ranged from .390 to .907. Implications for training and research are discussed.
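    Cohen's kappa, the agreement statistic reported above, corrects the observed agreement between two coders for agreement expected by chance; the indicator codes below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o
    corrected for the chance agreement p_e implied by each coder's marginals."""
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    fa, fb = Counter(codes_a), Counter(codes_b)
    pe = sum(fa[c] * fb[c] for c in set(codes_a) | set(codes_b)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical indicator codes assigned by two coders to six message segments.
coder1 = ["PFR1", "PFR1", "PFR2", "PFR2", "PFR1", "PFR2"]
coder2 = ["PFR1", "PFR2", "PFR2", "PFR2", "PFR1", "PFR1"]
kappa = cohens_kappa(coder1, coder2)
```

    With 4 of 6 segments agreeing and balanced marginals, the chance-corrected agreement here is 1/3, noticeably below the raw 0.667 agreement rate.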

  1. Reliability Analysis of Free Jet Scour Below Dams

    Chuanqi Li


    Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of the various scouring parameters. A reliability-based assessment of scour, taking into account the uncertainties of the parameters and coefficients involved, should therefore be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of various random variables on the probability of failure is analyzed.

  2. Modeling and Analysis of Component Faults and Reliability

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;


    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  3. Reliability analysis of two unit parallel repairable industrial system

    Mohit Kumar Kakkar


    The aim of this work is to present a reliability and profit analysis of a two-dissimilar-unit parallel system, under the assumptions that the operative unit cannot fail after post-repair inspection and replacement and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. The graphical behaviour of the mean time to system failure (MTSF) and the profit function has also been studied. In this paper, some important measures of the reliability characteristics of a two non-identical unit standby system model with repair, inspection and post-repair are obtained using the regenerative point technique.

  4. Preliminaries on core image analysis using fault drilling samples; Core image kaiseki kotohajime (danso kussaku core kaisekirei)

    Miyazaki, T.; Ito, H. [Geological Survey of Japan, Tsukuba (Japan)


    This paper introduces examples of image data analysis on fault drilling samples. The paper describes the following matters: the core samples used in the analysis are those obtained from wells drilled through the Nojima fault, which moved in the Hyogoken-Nanbu Earthquake; the CORESCAN system made by DMT Corporation, Germany, used in acquiring the image data, consists of a CCD camera, a light source and core rotation mechanism, and a personal computer, its resolution being about 5 pixels/mm in both axial and circumferential directions, and 24-bit full color; with respect to the opening fractures in core samples collected by using constant azimuth coring, it was possible to derive values of the opening width, inclination angle, and travel from the image data by using a commercially available software package for the personal computer; and comparison of this core image with the BHTV record and the hydrophone VSP record (travel and inclination obtained from the BHTV record agree well with those obtained from the core image). 4 refs., 4 figs.

  5. Hybrid Analysis of Engine Core Noise

    O'Brien, Jeffrey; Kim, Jeonglae; Ihme, Matthias


    Core noise, or the noise generated within an aircraft engine, is becoming an increasing concern for the aviation industry as other noise sources are progressively reduced. The prediction of core noise generation and propagation is especially challenging for computationalists since it involves extensive multiphysics including chemical reaction and moving blades in addition to the aerothermochemical effects of heated jets. In this work, a representative engine flow path is constructed using experimentally verified geometries to simulate the physics of core noise. A combustor, single-stage turbine, nozzle and jet are modeled in separate calculations using appropriate high fidelity techniques including LES, actuator disk theory and Ffowcs-Williams Hawkings surfaces. A one way coupling procedure is developed for passing fluctuations downstream through the flowpath. This method effectively isolates the core noise from other acoustic sources, enables straightforward study of the interaction between core noise and jet exhaust, and allows for simple distinction between direct and indirect noise. The impact of core noise on the farfield jet acoustics is studied extensively and the relative efficiency of different disturbance types and shapes is examined in detail.

  6. Analysis of the Reliability of the "Alternator- Alternator Belt" System

    Ivan Mavrin


    Before starting and also during the exploitation of various systems, it is very important to know how the system and its parts will behave during operation regarding breakdowns, i.e. failures. It is possible to predict the service behaviour of a system by determining the functions of reliability, as well as the frequency and intensity of failures. The paper considers the theoretical basics of the functions of reliability, frequency and intensity of failures for two main approaches: one using 6 equal intervals and the other 13 unequal intervals, for a concrete case taken from practice. The reliability of the "alternator - alternator belt" system installed in buses has been analysed according to the empirical data on failures. The empirical data on failures provide empirical functions of reliability and of the frequency and intensity of failures, which are presented in tables and graphically. The first analysis, performed by dividing the mean time between failures into 6 equal time intervals, gives forms of the empirical functions of failure frequency and intensity that approximately correspond to the typical functions. By dividing the failure phase into 13 unequal intervals with two failures in each interval, these functions indicate explicit transitions from the early-failure interval into the random-failure interval, i.e. into the ageing interval. The functions thus obtained are more accurate and represent a better solution for the given case. In order to estimate the reliability of these systems with greater accuracy, a greater number of failures needs to be analysed.

  7. Reliability and maintainability analysis of electrical system of drum shearers

    SEYED Hadi Hoseinie; MOHAMMAD Ataei; REZA Khalokakaie; UDAY Kumar


    The reliability and maintainability of the electrical system of the drum shearer at the Parvade.1 Coal Mine in central Iran were analyzed. The maintenance and failure data were collected during 19 months of shearer operation. According to trend and serial correlation tests, the data were independent and identically distributed (iid), and therefore statistical techniques were used for modeling. The data analysis shows that the time between failures (TBF) and time to repair (TTR) data obey the lognormal and three-parameter Weibull distributions, respectively. Reliability-based preventive maintenance time intervals for the electrical system of the drum shearer were calculated from the reliability plot. The reliability-based maintenance intervals for the 90%, 80%, 70% and 50% reliability levels are 9.91, 17.96, 27.56 and 56.1 h, respectively. The calculations also show that the time to repair (TTR) of this system varies in the range of 0.17-4 h, with a mean time to repair (MTTR) of 1.002 h. There is an 80% chance that repair of the electrical system of the shearer will be accomplished within 1.45 h.
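    The reliability-based interval computation can be sketched by inverting the three-parameter Weibull reliability function; the shape, scale and location values below are hypothetical, not the fitted mine data:

```python
import math

def pm_interval(level, shape, scale, loc=0.0):
    """Invert R(t) = exp(-((t - loc)/scale)**shape) to find the time t
    at which reliability drops to `level` (the preventive maintenance point)."""
    return loc + scale * (-math.log(level)) ** (1.0 / shape)

# Hypothetical three-parameter Weibull TBF fit: shape beta, scale eta (hours),
# location gamma (failure-free period, hours).
beta, eta, gamma = 1.2, 80.0, 5.0
intervals = {r: pm_interval(r, beta, eta, gamma) for r in (0.9, 0.8, 0.7, 0.5)}
```

    As in the study, a higher target reliability level yields a shorter maintenance interval, since the system must be serviced before its reliability decays below the chosen level.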

  8. Reliability analysis method for slope stability based on sample weight

    Zhi-gang YANG


    The single-safety-factor criteria for slope stability evaluation, derived from the rigid limit equilibrium method or the finite element method (FEM), may not include some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of the random variables, the minimal sample size of every random variable is extracted according to a small-sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variable. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to the one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.

  9. Semantic Web for Reliable Citation Analysis in Scholarly Publishing

    Ruben Tous


    Analysis of the impact of scholarly artifacts is constrained by current unreliable practices in cross-referencing, citation discovery, and citation indexing and analysis, which have not kept pace with the technological advances occurring in several areas such as knowledge management and security. Because citation analysis has become the primary component in scholarly impact factor calculation, and considering the relevance of this metric within both the scholarly publishing value chain and, especially, the professional curriculum evaluation of scholarly professionals, we argue that current practices need to be revised. This paper describes a reference architecture that aims to provide openness and reliability to the citation-tracking lifecycle. The solution relies on the use of digitally signed semantic metadata in the different stages of the scholarly publishing workflow, in such a manner that authors, publishers, repositories, and citation-analysis systems have access to independent, reliable evidence that is resistant to forgery, impersonation, and repudiation. As far as we know, this is the first paper to combine Semantic Web technologies and public-key cryptography to achieve reliable citation analysis in scholarly publishing.

  10. Advanced Materials and Solids Analysis Research Core (AMSARC)

    The Advanced Materials and Solids Analysis Research Core (AMSARC), centered at the U.S. Environmental Protection Agency's (EPA) Andrew W. Breidenbach Environmental Research Center in Cincinnati, Ohio, is the foundation for the Agency's solids and surfaces analysis capabilities. ...

  12. Reliability test and failure analysis of high power LED packages*

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng


    A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria are quite different, and rapid reliability assessment standards are urgently needed by the LED industry. An 85 °C/85% RH test at 700 mA was run for 1000 h on our LED modules together with those of three other vendors; our modules showed no visible degradation in optical performance, while the modules of two other vendors showed significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy were applied to the LED packages. Failure mechanisms such as delaminations and cracks were detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for failure analysis and for the reliability design of LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing.

  13. Molecular double core-hole electron spectroscopy for chemical analysis

    Tashiro, Motomichi; Fukuzawa, Hironobu; Ueda, Kiyoshi; Buth, Christian; Kryzhevoi, Nikolai V; Cederbaum, Lorenz S


    We explore the potential of double core-hole electron spectroscopy for chemical analysis in terms of x-ray two-photon photoelectron spectroscopy (XTPPS). The creation of deep single and double core vacancies induces significant reorganization of the valence electrons. The corresponding relaxation energies and interatomic relaxation energies are evaluated by CASSCF calculations. We propose a method to extract these quantities experimentally from the measurement of single and double core-hole ionization potentials (IPs and DIPs). The influence of the chemical environment on these DIPs is also discussed, for states with two holes at the same atomic site and for states with two holes at two different atomic sites. The electron density difference between the ground and double core-hole states clearly shows the relaxation accompanying double core-hole ionization. The effect is also compared with the sensitivity of single core-hole ionization potentials (IPs) arising in single core-hole electron spectroscopy. We have ...

  14. Human reliability analysis of the Tehran research reactor using the SPAR-H method

    Barati Ramin


    The purpose of this paper is to cover the human reliability analysis of the Tehran research reactor, using an appropriate method for the representation of human failure probabilities. In the present work, the Technique for Human Error Rate Prediction (THERP) and the Standardized Plant Analysis Risk-Human Reliability (SPAR-H) method, both applied extensively to nuclear power plants, have been utilized to quantify different categories of human errors. Human reliability analysis is an integral and significant part of probabilistic safety analysis studies; without it, a probabilistic safety analysis would not be a systematic and complete representation of actual plant risks. In addition, possible human errors in research reactors constitute a significant part of the associated risk of such installations, and including them in a probabilistic safety analysis for such facilities is a complicated issue. SPAR-H can be used to address these concerns; it is a well-documented and systematic human reliability analysis method, with tables of human performance choices prepared in consultation with experts in the domain. In this method, performance shaping factors are selected via tables, human action dependencies are accounted for, and the method is well designed for the intended use. In this study, in consultation with reactor operators, human errors are identified and adequate performance shaping factors are assigned to produce proper human failure probabilities. Our importance analysis reveals that the human actions involved in the possibility of an external object falling onto the reactor core are the most significant human errors for the Tehran research reactor, and should be considered in reactor emergency operating procedures and operator training programs aimed at improving reactor safety.
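    In SPAR-H, a human error probability is the nominal HEP multiplied by the composite performance shaping factor, with an adjustment factor that keeps the product below 1; the task and PSF values below are illustrative, not those of the Tehran study:

```python
def spar_h_hep(nominal_hep, psfs):
    """SPAR-H style HEP: nominal HEP times the composite PSF, with the
    adjustment factor HEP = NHEP*PSFc / (NHEP*(PSFc - 1) + 1).
    (SPAR-H prescribes the adjustment when several negative PSFs combine;
    it is applied unconditionally here for simplicity -- it reduces to
    NHEP*PSFc when the composite is 1.)"""
    composite = 1.0
    for p in psfs:
        composite *= p
    return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)

# Illustrative diagnosis task: nominal HEP 0.01,
# poor ergonomics (x10) and high stress (x2) as assumed PSF multipliers.
hep = spar_h_hep(0.01, [10.0, 2.0])
```

    The adjustment matters precisely when PSFs multiply into large factors: without it, a composite of 200 on a nominal HEP of 0.01 would give an impossible probability of 2.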

  15. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.


    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
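    A bare-bones MPP search under assumed inputs (a linear strength-minus-load limit state with independent normal variables; all numbers hypothetical) using the Hasofer-Lind-Rackwitz-Fiessler iteration:

```python
import numpy as np

mu = np.array([200.0, 150.0])      # means of strength R and load S (hypothetical)
sigma = np.array([20.0, 10.0])     # standard deviations (hypothetical)

def g(x):                          # limit state: failure when g < 0
    return x[0] - x[1]

def grad_g(x):
    return np.array([1.0, -1.0])

# HL-RF iteration in standard normal u-space, where x = mu + sigma * u.
u = np.zeros(2)
for _ in range(50):
    x = mu + sigma * u
    gu = grad_g(x) * sigma                  # chain rule: dg/du
    u = (gu @ u - g(x)) * gu / (gu @ gu)    # project onto linearized surface
beta = float(np.linalg.norm(u))             # reliability index = distance to MPP
```

    For this linear limit state the iteration converges in one step to beta = 50/sqrt(500) ~ 2.24; the MPP coordinates u are exactly the quantities whose derivatives feed the RIA/PMA sensitivity equations discussed in the paper.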

  16. Conceptual study of advanced PWR core design. Development of advanced PWR core neutronics analysis system

    Kim, Chang Hyo; Kim, Seung Cho; Kim, Taek Kyum; Cho, Jin Young; Lee, Hyun Cheol; Lee, Jung Hun; Jung, Gu Young [Seoul National University, Seoul (Korea, Republic of)


    The neutronics design system of the advanced PWR consists of (i) a hexagonal cell and fuel assembly code for generation of homogenized few-group cross sections and (ii) a global core neutronics analysis code for computation of the steady-state pin-wise or assembly-wise core power distribution, core reactivity with fuel burnup, control rod worth and reactivity coefficients, transient core power, etc. The major research target of the first year is to establish the numerical methods and solutions of the multi-group diffusion equations for neutronics code development. Specifically, the following studies are planned: (i) formulation of various numerical methods, such as the finite element method (FEM), the analytical nodal method (ANM), the analytic function expansion nodal (AFEN) method and the polynomial expansion nodal (PEN) method, that are applicable to hexagonal core geometry; (ii) comparative evaluation of the numerical effectiveness of these methods based on numerical solutions to various hexagonal core neutronics benchmark problems. The results are as follows: (i) formulation of numerical solutions to the multi-group diffusion equations based on the above numerical methods; (ii) numerical computations by these methods for hexagonal neutronics benchmark problems, namely the VVER-1000 problem without reflector, the VVER-440 problem I with reflector, the modified IAEA PWR problems without and with reflector, the ANL large heavy water reactor problem, the small HTGR problem, and the VVER-440 problem II with reflector; (iii) comparative evaluation of the numerical effectiveness of the various numerical methods; (iv) development of HEXFEM, a multi-dimensional hexagonal core neutronics analysis code based on FEM. In the target year of this research, the spatial neutronics analysis code for hexagonal core geometry (tentatively called NEMSNAP-H) will be completed. Combining NEMSNAP-H with the hexagonal cell and assembly code will then equip us with a hexagonal core neutronics design system. (Abstract Truncated)

  17. Fatigue Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune


    In this paper, a fatigue reliability analysis of a Mono-tower platform is presented. The failure mode, fatigue failure in the butt welds, is investigated with two different models: one with the fatigue strength expressed through SN relations, the other with the fatigue strength expressed thro… The analysis accounts for uncertainties of the natural period, damping ratio, current, stress spectrum and parameters describing the fatigue strength. Further, soil damping is shown to be significant for the Mono-tower.

  18. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Raj Kumar


    In this paper, we illustrate the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures, classical as well as Bayesian. The quasi-Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and to carry out output analysis of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
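    For the Gumbel model, simple method-of-moments estimates are commonly used as starting values for Newton-Raphson maximum likelihood (and as initial values for MCMC chains); the failure-time data below are invented for illustration:

```python
import math

# Hypothetical inter-failure times (hours) from software testing.
data = [10.0, 12.0, 15.0, 11.0, 13.0]
n = len(data)
mean = sum(data) / n
var = sum((t - mean) ** 2 for t in data) / (n - 1)   # sample variance

# Gumbel moment estimators: the variance fixes the scale (var = pi^2 * b^2 / 6),
# the mean fixes the location (mean = mu + Euler_gamma * b).
EULER_GAMMA = 0.5772156649
scale_hat = math.sqrt(6.0 * var) / math.pi
loc_hat = mean - EULER_GAMMA * scale_hat
```

    These closed-form values are usually close enough to the MLE that the quasi-Newton-Raphson refinement converges in a few steps.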

  19. Reliability analysis method applied in slope stability: slope prediction and forecast on stability analysis

    Wenjuan ZHANG; Li CHEN; Ning QU; Hai'an LIANG


    Landslides are a kind of geologic hazard that happens all over the world, bringing huge losses of human life and property; it is therefore very important to research them. This study focused on the combination of single and regional landslide analysis, and of traditional slope stability analysis methods with reliability analysis methods. Methods for the prediction of slopes and for reliability analysis were also discussed.

  20. Reliability analysis based on the losses from failures.

    Todinov, M T


    Conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this holds only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with them. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
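    The series-branch reduction described above can be sketched as follows (hazard rates and costs are invented): mutually exclusive failure modes collapse into one equivalent component whose expected loss given failure is the rate-weighted mixture of the mode losses:

```python
def reduce_series(modes):
    """Collapse series components with mutually exclusive failure modes into a
    single equivalent component: summed hazard rate, and expected loss given
    failure weighted by each mode's conditional probability lam_i / lam_eq."""
    lam_eq = sum(lam for lam, _ in modes)
    loss_given_failure = sum((lam / lam_eq) * cost for lam, cost in modes)
    return lam_eq, loss_given_failure

# Hypothetical series branch: (hazard rate per hour, loss per failure).
modes = [(1e-4, 1000.0), (3e-4, 2000.0)]
lam_eq, expected_loss = reduce_series(modes)
```

    The same reduction applies recursively, which is what makes simplifying large reliability block diagrams tractable.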

  1. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations


    AD-A247 430, MTL TR 92-5. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations. Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige. Distribution: Defense Technical Information Center, Cameron Station, Building 5, 5010 Duke Street, Alexandria, VA 22304-6145.


    Peng Shiji (彭世济); Lu Mingyin (卢明银); Zhang Daxian (张达贤)


    It is stipulated in the Chinese national document "The Economical Appraisal Methods for Construction Projects" that dynamic analysis should dominate project economic appraisal methods. This paper sets up a dynamic investment forecast model for the Yuanbaoshan Surface Coal Mine. Based on this model, the investment reliability has been analysed using simulation and analytic methods, and the probability that the designed internal rate of return can reach 8.4% has also been studied from an economic point of view.

  3. Reliability analysis for new technology-based transmitters

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)


    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').

  4. Analysis and Reliability Performance Comparison of Different Facial Image Features

    J. Madhavan


    This study performs reliability analysis on different facial features, with weighted retrieval accuracy, on facial databases of increasing size. Many methods have been analyzed in the existing literature using facial databases of constant size, but little work has studied performance in terms of reliability, or how a method behaves as the database grows. In this study, certain feature extraction methods are analyzed with the regular performance measure, and the performance measures are also modified to fit real-time requirements by giving weightings to the closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD, and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported. Among them, Gabor wavelet with HMM gives higher reliability than the other three methods. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.




    RHIC has been successfully operated for 5 years as a collider for different species, ranging from heavy ions, including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared to similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.

  6. Using functional analysis diagrams to improve product reliability and cost

    Ioannis Michalakoudis


    Failure mode and effects analysis and value engineering are well-established methods in the manufacturing industry, commonly applied to optimize product reliability and cost, respectively. Both processes, however, require cross-functional teams to identify and evaluate the product/process functions and are resource-intensive, hence their application is mostly limited to large organizations. In this article, we present a methodology involving the concurrent execution of failure mode and effects analysis and value engineering, assisted by a set of hierarchical functional analysis diagram models, along with the outcomes of a pilot application in a UK-based manufacturing small and medium enterprise. Analysis of the results indicates that this new approach could significantly enhance the resource efficiency and effectiveness of both failure mode and effects analysis and value engineering processes.

  7. Core Competence Analysis--Toyota Production System



      Core competencies are the wellspring of new business development. They are the sharpest sword for penetrating a mature market and for holding and enlarging existing share. Toyota makes good use of its TPS and has formed its own style, which other car manufacturers find hard to imitate. In contrast, the Chinese company FAW only imitates the superficial aspects of Toyota while ignoring the problems in its own manufacturing line.

  8. Mutation Analysis Approach to Develop Reliable Object-Oriented Software

    Monalisa Sarma


    In general, modern programs are large and complex, and it is essential that they be highly reliable in applications. To help developers build highly reliable software, the Java programming language provides a rich set of exceptions and exception handling mechanisms. Exception handling mechanisms are intended to help developers build robust programs. Given a program with exception handling constructs, effective testing must detect whether all possible exceptions are raised and caught. However, complex exception handling constructs make it tedious to trace which exceptions are handled where and which exceptions are passed on. In this paper, we address this problem and propose a mutation analysis approach to developing reliable object-oriented programs. We apply a number of mutation operators to create a large set of mutant programs with different types of faults. We then generate test cases and test data to uncover exception-related faults. The test suite so obtained is applied to the mutant programs, measuring the mutation score and hence verifying whether the mutant programs are effective. We have tested our approach on a number of case studies to substantiate the efficacy of the proposed mutation analysis technique.
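
    The mutation-score bookkeeping the abstract relies on can be sketched in a few lines of Python; the `add` function, the hand-written mutants, and the miniature test suite below are invented stand-ins for the authors' mutation operators and generated test cases:

```python
def run_tests(impl):
    """Tiny test suite for an addition function; True when all checks pass."""
    try:
        assert impl(2, 3) == 5
        assert impl(-1, 1) == 0
        assert impl(0, 0) == 0
        return True
    except AssertionError:
        return False

def add(a, b):               # program under test
    return a + b

# Hand-written mutants standing in for automated mutation operators:
mutants = [
    lambda a, b: a - b,      # arithmetic operator replacement
    lambda a, b: a * b,      # arithmetic operator replacement
    lambda a, b: a + b + 1,  # constant perturbation
]
killed = sum(not run_tests(m) for m in mutants)
mutation_score = killed / len(mutants)  # fraction of mutants the suite detects
```

    A low mutation score would signal that the generated tests miss some injected faults, which is exactly the feedback loop the paper uses to judge test-suite effectiveness.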

  9. Strength Reliability Analysis of Stiffened Cylindrical Shells Considering Failure Correlation

    Xu Bai; Liping Sun; Wei Qin; Yongkun Lv


    The stiffened cylindrical shell is commonly used for the pressure hulls of submersibles and the legs of offshore platforms. Various failure modes arise because of uncertainty in structural size and material properties, uncertainty in the calculation model, and machining errors. Correlations among failure modes must be considered in the structural reliability of stiffened cylindrical shells, but the traditional method cannot account for these correlations effectively. The aim of this study is to present a reliability analysis method for stiffened cylindrical shells which considers the correlations among failure modes. Firstly, the joint failure probability formula for two correlated failure modes is derived from the 2D joint probability density function. Secondly, the full probability formula for the tandem structural system is given with consideration of the correlations among failure modes. Finally, the accuracy of the system reliability calculation is verified through Monte Carlo simulation. The analysis shows that the failure probability of stiffened cylindrical shells can be obtained by combining the failure probabilities of the individual modes.
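
    As a rough stdlib sketch of the bounding idea behind the abstract (assuming, purely for illustration, that the two safety margins are correlated standard normals with invented reliability indices and correlation coefficient):

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def union_failure_probability(beta1, beta2, rho, n=200_000, seed=1):
    """Monte Carlo estimate of P(mode 1 fails or mode 2 fails) when the two
    safety margins are correlated standard normals; mode i fails when its
    margin drops below -beta_i (beta_i = reliability index)."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + s * rng.gauss(0.0, 1.0)   # correlated second margin
        if z1 < -beta1 or z2 < -beta2:
            hits += 1
    return hits / n

p1, p2 = phi(-2.0), phi(-2.5)                     # marginal failure probabilities
p_union = union_failure_probability(2.0, 2.5, rho=0.6)
# The joint result respects the classic series-system bounds:
# max(p1, p2) <= P(union) <= p1 + p2.
```

    Ignoring the correlation and simply summing p1 and p2 overstates the system failure probability, which is the motivation for the joint-density derivation in the paper.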

  10. Reliability Analysis of Penetration Systems Using Nondeterministic Methods



    Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.
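
    The underlying probabilistic statement, P(capacity exceeds demand), can be illustrated with a plain Monte Carlo sketch; the normal distributions and the numbers are invented, and this is a stand-in for, not a reproduction of, the SRS/AMV computation:

```python
import math
import random

def survival_probability(mean_demand, sd_demand, mean_capacity, sd_capacity,
                         n=100_000, seed=7):
    """Monte Carlo estimate of P(capacity > demand): the component survives
    when its (random) capacity exceeds the (random) shock demand."""
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(mean_capacity, sd_capacity) > rng.gauss(mean_demand, sd_demand)
        for _ in range(n)
    )
    return hits / n

def analytic(mean_demand, sd_demand, mean_capacity, sd_capacity):
    """Closed form for independent normals: Phi(margin mean / margin sd)."""
    beta = (mean_capacity - mean_demand) / math.hypot(sd_demand, sd_capacity)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

r_mc = survival_probability(100.0, 10.0, 160.0, 15.0)
r_exact = analytic(100.0, 10.0, 160.0, 15.0)
```

    Methods such as AMV and response surfaces exist precisely because, for expensive penetration simulations, one cannot afford the tens of thousands of model evaluations this brute-force loop assumes.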

  11. Analytical reliability analysis of soil-water characteristic curve

    Johari A.


    The Soil Water Characteristic Curve (SWCC), also known as the soil water-retention curve, is an important part of any constitutive relationship for unsaturated soils. Deterministic assessment of the SWCC has received considerable attention in the past few years. However, the uncertainties of the parameters which affect the SWCC dictate that the problem is probabilistic rather than deterministic in nature. In this research, a Gene Expression Programming (GEP)-based SWCC model is employed to assess the reliability of the SWCC. For this purpose, the Jointly Distributed Random Variables (JDRV) method is used as an analytical method for reliability analysis. All input parameters of the model, which are the initial void ratio, initial water content, and silt and clay contents, are treated as stochastic and modelled using truncated normal probability density functions. The results are compared with those of Monte Carlo (MC) simulation. It is shown that the initial water content is the most influential parameter in the SWCC.
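
    A minimal sketch of the Monte Carlo side of such a comparison, using rejection sampling for the truncated normal inputs; the parameter names follow the abstract, but the response function, threshold, and all numeric values are invented:

```python
import random

def truncated_normal(rng, mu, sigma, lo, hi):
    """Sample a normal(mu, sigma) truncated to [lo, hi] by rejection
    (adequate here because the bounds keep almost all of the mass)."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def exceedance_fraction(n=50_000, seed=3):
    """Toy counterpart of the JDRV-vs-MC comparison: the stochastic inputs
    are truncated normals; the response and threshold are invented."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        e0 = truncated_normal(rng, 0.80, 0.10, 0.50, 1.10)  # initial void ratio
        w0 = truncated_normal(rng, 0.25, 0.05, 0.10, 0.40)  # initial water content
        if e0 + 2.0 * w0 > 1.5:                             # invented response
            count += 1
    return count / n

frac = exceedance_fraction()
```

    The JDRV method replaces this sampling loop with an analytical convolution of the joint density, which is why the paper can compare the two for accuracy and cost.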

  12. Optimization Based Efficiencies in First Order Reliability Analysis

    Peck, Jeffrey A.; Mahadevan, Sankaran


    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
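
    The core of the BFORM idea, Broyden's rank-one secant update of the gradient, can be sketched as follows; the limit-state function and the stale starting estimate are invented for illustration:

```python
def broyden_update(grad, x_old, x_new, g_old, g_new):
    """Broyden rank-one update of an approximate gradient of a scalar
    limit-state function g, in place of a fresh finite-difference sweep:
        grad_new = grad + ((dg - grad . dx) / (dx . dx)) * dx
    The update enforces the secant condition grad_new . dx = dg."""
    dx = [a - b for a, b in zip(x_new, x_old)]
    dg = g_new - g_old
    corr = (dg - sum(gi * di for gi, di in zip(grad, dx))) / sum(d * d for d in dx)
    return [gi + corr * di for gi, di in zip(grad, dx)]

# Toy limit-state function with known gradient [3, -2]:
g = lambda x: 3.0 * x[0] - 2.0 * x[1] + 5.0
x0, x1 = [0.0, 0.0], [1.0, 0.5]
grad0 = [2.0, -1.0]                       # deliberately stale estimate
grad1 = broyden_update(grad0, x0, x1, g(x0), g(x1))
```

    Each FORM iteration already evaluates g at the new design point, so this update costs no extra black-box calls, whereas central finite differences would cost 2n calls per iteration for n random variables.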

  13. Issues in benchmarking human reliability analysis methods : a literature review.

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)


    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.


    Clark, J. S.


    One of the most important factors in the development of nuclear rocket engine designs is to be able to accurately predict temperatures and pressures throughout a fission nuclear reactor core with axial hydrogen flow through circular coolant passages. CAC is an analytical prediction program to study the heat transfer and fluid flow characteristics of a circular coolant passage. CAC predicts as a function of time axial and radial fluid conditions, passage wall temperatures, flow rates in each coolant passage, and approximate maximum material temperatures. CAC incorporates the hydrogen properties model STATE to provide fluid-state relations, thermodynamic properties, and transport properties of molecular hydrogen in any fixed ortho-para combination. The program requires the general core geometry, the core material properties as a function of temperature, the core power profile, and the core inlet conditions as function of time. Although CAC was originally developed in FORTRAN IV for use on an IBM 7094, this version is written in ANSI standard FORTRAN 77 and is designed to be machine independent. It has been successfully compiled on IBM PC series and compatible computers running MS-DOS with Lahey F77L, a Sun4 series computer running SunOS 4.1.1, and a VAX series computer running VMS 5.4-3. CAC requires 300K of RAM under MS-DOS, 422K of RAM under SunOS, and 220K of RAM under VMS. No sample executable is provided on the distribution medium. Sample input and output data are included. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. CAC was developed in 1966, and this machine independent version was released in 1992. IBM-PC and IBM are registered trademarks of International Business Machines. Lahey F77L is a registered trademark of Lahey Computer Systems, Inc. SunOS is a trademark of Sun Microsystems, Inc. VMS is a trademark of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.


    Susan S. Sorini; John F. Schabron; Joseph F. Rovani Jr


    Soil sampling and storage practices for volatile organic analysis must be designed to minimize loss of volatile organic compounds (VOCs) from samples. The En Core® sampler is designed to collect and store soil samples in a manner that minimizes loss of contaminants due to volatilization and/or biodegradation. An ASTM International (ASTM) standard practice, D 6418, Standard Practice for Using the Disposable En Core Sampler for Sampling and Storing Soil for Volatile Organic Analysis, describes use of the En Core sampler to collect and store a soil sample of approximately 5 grams or 25 grams for volatile organic analysis and specifies sample storage in the En Core sampler at 4 ± 2 °C for up to 48 hours; -7 to -21 °C for up to 14 days; or 4 ± 2 °C for up to 48 hours followed by storage at -7 to -21 °C for up to five days. This report discusses activities performed during the past year to promote and continue acceptance of the En Core samplers based on their performance to store soil samples for VOC analysis. The En Core sampler is designed to collect soil samples for VOC analysis at the soil surface. To date, a sampling tool for collecting and storing subsurface soil samples for VOC analysis is not available. Development of a subsurface VOC sampling/storage device was initiated in 1999. This device, which is called the Accu Core™ sampler, is designed so that a soil sample can be collected below the surface using a dual-tube penetrometer and transported to the laboratory for analysis in the same container. Laboratory testing of the current Accu Core design shows that the device holds low-level concentrations of VOCs in soil samples during 48-hour storage at 4 ± 2 °C and that the device is ready for field evaluation to generate additional performance data. This report discusses a field validation exercise that was attempted in Pennsylvania in 2004 and activities being performed to plan and conduct a field validation study in 2006. A draft ASTM


    Hong-Zhong Huang


    Engineering design under uncertainty has gained considerable attention in recent years. A great multitude of new design optimization methodologies and reliability analysis approaches have been put forth with the aim of accommodating various uncertainties. Uncertainties in practical engineering applications are commonly classified into two categories, i.e., aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty arises because of unpredictable variation in the performance and processes of systems; it is irreducible even when more data or knowledge is added. Epistemic uncertainty, on the other hand, stems from lack of knowledge of the system due to limited data, measurement limitations, or simplified approximations in modeling system behavior, and it can be reduced by obtaining more data or knowledge. More specifically, aleatory uncertainty is naturally represented by a statistical distribution whose associated parameters can be characterized from sufficient data. If, however, the data is limited and cannot be quantified in a statistical sense, an epistemic treatment can be considered as an alternative in such a situation. Of the several optional treatments for epistemic uncertainty, possibility theory and evidence theory have proved to be the most computationally efficient and stable for reliability analysis and engineering design optimization. This study first attempts to provide a better understanding of uncertainty in engineering design by giving a comprehensive overview of its classifications, theories, and design considerations. A review is then conducted of general topics such as the foundations and applications of possibility theory and evidence theory. This overview includes the most recent results from theoretical research, computational developments, and performance improvement of possibility theory and evidence theory, with an emphasis on revealing the capability and characteristics of quantifying uncertainty from different perspectives.
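
    A minimal illustration of the evidence-theory representation mentioned above, with invented basic probability assignments over a component's possible states:

```python
def belief_plausibility(masses, event):
    """Dempster-Shafer belief and plausibility of an event. `masses` maps
    focal elements (frozensets) to basic probability assignments; Bel sums
    the mass of focal sets contained in the event, Pl the mass of those
    that merely intersect it."""
    bel = sum(m for focal, m in masses.items() if focal <= event)
    pl = sum(m for focal, m in masses.items() if focal & event)
    return bel, pl

# Invented evidence about a component's state, with 0.3 left as ignorance:
masses = {
    frozenset({"ok"}): 0.5,
    frozenset({"degraded"}): 0.2,
    frozenset({"ok", "degraded", "failed"}): 0.3,
}
bel, pl = belief_plausibility(masses, frozenset({"ok"}))
# Epistemic uncertainty shows up as the interval [Bel, Pl], not a point value.
```

    With more data the ignorance mass shrinks and the [Bel, Pl] interval tightens toward a single probability, which mirrors the reducibility of epistemic uncertainty described in the abstract.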

  17. Reliability and risk analysis data base development: an historical perspective

    Fragola, Joseph R


    Collection of empirical data and data base development for use in the prediction of the probability of future events has a long history. Dating back at least to the 17th century, safe passage events and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these developed classes and associated attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. and Galvagni, R. Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18-20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History Of Theory Of Structures In The 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G. Discorsi e dimostrazioni mathematiche intorno a due nuove science, (Discourses and mathematical demonstrations concerning two new sciences, Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. 
and


    Dustin Lawrence


    The purpose of this study was to inform decision makers at state and local levels, as well as property owners about the amount of water that can be supplied by rainwater harvesting systems in Texas so that it may be included in any future planning. Reliability of a rainwater tank is important because people want to know that a source of water can be depended on. Performance analyses were conducted on rainwater harvesting tanks for three Texas cities under different rainfall conditions and multiple scenarios to demonstrate the importance of optimizing rainwater tank design. Reliability curves were produced and reflect the percentage of days in a year that water can be supplied by a tank. Operational thresholds were reached in all scenarios and mark the point at which reliability increases by only 2% or less with an increase in tank size. A payback period analysis was conducted on tank sizes to estimate the amount of time it would take to recoup the cost of installing a rainwater harvesting system.

  19. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

    Evans, John W.; Gallo, Luis; Kaminsky, Mark


    Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov Chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two stage Bayesian framework.
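
    The two-stage update can be miniaturized with the conjugate Beta-Binomial pair in place of MCMC sampling; all counts below are invented for illustration, not JWST data:

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update: a Beta(alpha, beta) prior on deployment
    reliability combined with a binomial likelihood gives a
    Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Stage 1 (invented numbers): heritage record of 45 successful deployments
# in 47 attempts, folded into a near-non-informative Beta(0.5, 0.5) prior.
a1, b1 = beta_binomial_update(0.5, 0.5, 45, 2)
# Stage 2 (invented numbers): ground tests of the scale-model article,
# 9 clean deployments out of 10, updating the heritage-informed prior.
a2, b2 = beta_binomial_update(a1, b1, 9, 1)
posterior_mean = beta_mean(a2, b2)
```

    The paper's MCMC machinery serves the same role for cases where the prior is not conjugate; here the two-stage structure is visible in two lines of arithmetic.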

  20. A Research Roadmap for Computation-Based Human Reliability Analysis

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  1. Reliability and risk analysis using artificial neural networks

    Robinson, D.G. [Sandia National Labs., Albuquerque, NM (United States)


    This paper discusses preliminary research at Sandia National Laboratories into the application of artificial neural networks for reliability and risk analysis. The goal of this effort is to develop a reliability based methodology that captures the complex relationship between uncertainty in material properties and manufacturing processes and the resulting uncertainty in life prediction estimates. The inputs to the neural network model are probability density functions describing system characteristics and the output is a statistical description of system performance. The most recent application of this methodology involves the comparison of various low-residue, lead-free soldering processes with the desire to minimize the associated waste streams with no reduction in product reliability. Model inputs include statistical descriptions of various material properties such as the coefficients of thermal expansion of solder and substrate. Consideration is also given to stochastic variation in the operational environment to which the electronic components might be exposed. Model output includes a probabilistic characterization of the fatigue life of the surface mounted component.

  2. Fifty Years of THERP and Human Reliability Analysis

    Ronald L. Boring


    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.

  3. Reliability and Robustness Analysis of the Masinga Dam under Uncertainty

    Hayden Postle-Floyd


    Kenya’s water abstraction must meet the projected growth in municipal and irrigation demand by the end of 2030 in order to achieve the country’s industrial and economic development plan. The Masinga dam, on the Tana River, is the key to meeting this goal to satisfy the growing demands whilst also continuing to provide hydroelectric power generation. This study quantitatively assesses the reliability and robustness of the Masinga dam system under uncertain future supply and demand using probabilistic climate and population projections, and examines how long-term planning may improve the longevity of the dam. River flow and demand projections are used alongside each other as inputs to the dam system simulation model linked to an optimisation engine to maximise water availability. Water availability after demand satisfaction is assessed for future years, and the projected reliability of the system is calculated for selected years. The analysis shows that maximising power generation on a short-term year-by-year basis achieves 80%, 50% and 1% reliability by 2020, 2025 and 2030 onwards, respectively. Longer term optimal planning, however, has increased system reliability to up to 95% in 2020, 80% in 2025, and more than 40% in 2030 onwards. In addition, increasing the capacity of the reservoir by around 25% can significantly improve the robustness of the system for all future time periods. This study provides a platform for analysing the implication of different planning and management of Masinga dam and suggests that careful consideration should be given to account for growing municipal needs and irrigation schemes in both the immediate and the associated Tana River basin.

  4. Human Performance Modeling for Dynamic Human Reliability Analysis

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory


    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  5. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;

    In this paper a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross-sections, and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN...... for the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties...

  6. Reliability Analysis of a Mono-Tower Platform

    Kirkegaard, Poul Henning; Enevoldsen, I.; Sørensen, John Dalsgaard;


    In this paper, a reliability analysis of a Mono-tower platform is presented. The failure modes considered are yielding in the tube cross sections and fatigue failure in the butt welds. The fatigue failure mode is investigated with a fatigue model, where the fatigue strength is expressed through SN...... that the fatigue limit state is a significant failure mode for the Mono-tower platform. Further, it is shown for the fatigue failure mode that the largest contributions to the overall uncertainty are due to the damping ratio, the inertia coefficient, the stress concentration factor, the model uncertainties...

  7. Fault Diagnosis and Reliability Analysis Using Fuzzy Logic Method

    Miao Zhinong; Xu Yang; Zhao Xiangyu


    A new fuzzy logic fault diagnosis method is proposed. In this method, fuzzy equations are employed to estimate the component state of a system based on the measured system performance and the relationship between component state and system performance, which is called the "performance-parameter" knowledge base and is constructed by experts. Compared with traditional fault diagnosis methods, this fuzzy logic method can use humans' intuitive knowledge and does not need a precise mapping between system performance and component state. Simulation demonstrates its effectiveness in fault diagnosis. The reliability analysis is then performed based on the fuzzy logic method.
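    The fuzzy estimation step described in the abstract can be sketched as a tiny rule-based inference. This is a hedged illustration only: the membership functions, the three rules, and the numeric state values are invented placeholders, not the authors' "performance-parameter" knowledge base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def falling(x, a, b):
    """Shoulder function: full membership below a, none above b."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def rising(x, a, b):
    return 1.0 - falling(x, a, b)

def component_state(perf):
    """Estimate component health (0 = failed, 1 = new) from normalized performance."""
    activations = {
        0.1: falling(perf, 0.5, 0.8),        # low performance -> likely failed
        0.5: tri(perf, 0.5, 0.75, 1.0),      # medium performance -> degraded
        0.9: rising(perf, 0.8, 0.95),        # high performance -> healthy
    }
    num = sum(state * a for state, a in activations.items())
    den = sum(activations.values())
    return num / den if den else None

estimate = component_state(0.9)  # mostly "healthy" with some "degraded" activation
```

    The weighted-average defuzzification mirrors the intuition in the abstract: the expert rule base replaces a precise performance-to-state mapping.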


    G. W. Parry; J. A. Forester; V. N. Dang; S. M. L. Hendrickson; M. Presley; E. Lois; J. Xing


    This paper describes a method, IDHEAS (Integrated Decision-Tree Human Event Analysis System), that has been developed jointly by the US NRC and EPRI as an improved approach to Human Reliability Analysis (HRA), based on an understanding of the cognitive mechanisms and performance influencing factors (PIFs) that affect operator responses. The paper describes the various elements of the method, namely the performance of a detailed cognitive task analysis that is documented in a crew response tree (CRT), the development of the associated timeline to identify the critical tasks, i.e. those whose failure results in a human failure event (HFE), and an approach to quantification that is based on explanations of why the HFE might occur.

  9. Integration of human reliability analysis into the high consequence process

    Houghton, F.K.; Morzinski, J.


    When performing a hazards analysis (HA) for a high consequence process, human error often plays a significant role. In order to integrate human error into the hazards analysis, a human reliability analysis (HRA) is performed. Human reliability is the probability that a person will correctly perform a system-required activity in a required time period and will perform no extraneous activity that will affect the correct performance. Even though human error is a very complex subject that can only approximately be addressed in risk assessment, an attempt must be made to estimate the effect of human errors. The HRA provides data that can be incorporated in the hazard analysis event. This paper will discuss the integration of HRA into a HA for the disassembly of a high explosive component. The process was designed to use a retaining fixture to hold the high explosive in place during a rotation of the component. This tool was designed as a redundant safety feature to help prevent a drop of the explosive. This paper will use the retaining fixture to demonstrate the phases of the HRA methodology. The first phase is to perform a task analysis. The second phase is the identification of the potential human functions, both cognitive and psychomotor, performed by the worker. During the last phase the human errors are quantified. In reality, the HRA process is an iterative one in which the stages overlap and information gathered in one stage may be used to refine a previous stage. The rationale for the decision to use or not use the retaining fixture, and the role the HRA played in that decision, will be discussed.

  10. Continuous flow analysis of labile iron in ice-cores.

    Hiscock, William T; Fischer, Hubertus; Bigler, Matthias; Gfeller, Gideon; Leuenberger, Daiana; Mini, Olivia


    The important active and passive role of mineral dust aerosol in the climate and the global carbon cycle over the last glacial/interglacial cycles has been recognized. However, little data on the most important aeolian dust-derived biological micronutrient, iron (Fe), have so far been available from ice-cores from Greenland or Antarctica. Furthermore, Fe deposition reconstructions derived from the palaeoproxies particulate dust and calcium differ significantly from the Fe flux data available. The ability to measure high temporal resolution Fe data in polar ice-cores is crucial for the study of the timing and magnitude of relationships between geochemical events and biological responses in the open ocean. This work combines an existing flow injection analysis (FIA) methodology for low-level trace Fe determinations with an existing glaciochemical analysis system, continuous flow analysis (CFA) of ice-cores. Fe-induced oxidation of N,N'-dimethyl-p-phenylenediamine (DPD) is used to quantify the biologically more important and easily leachable Fe fraction released in a controlled digestion step at pH ~1.0. The developed method was successfully applied to the determination of labile Fe in ice-core samples collected from the Antarctic Byrd ice-core and the Greenland Ice-Core Project (GRIP) ice-core.

  11. Tailoring a Human Reliability Analysis to Your Industry Needs

    DeMott, D. L.


    Accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed.

  12. Fatigue Reliability Analysis of Wind Turbine Cast Components

    Hesam Mirzaei Rafsanjani


    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability to be used for decision-making if additional cost considerations are added. In this paper, a statistical approach is presented based on statistical hypothesis testing and analysis of covariance (ANCOVA), which can be applied to compare different groups (manufacturers, suppliers, test facilities, etc.) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for cast test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) for fatigue assessment are estimated based on the statistical analyses and by introduction of physical, model and statistical uncertainties used for the illustration of reliability assessment.

  13. Inclusion of fatigue effects in human reliability analysis

    Griffith, Candice D. [Vanderbilt University, Nashville, TN (United States); Mahadevan, Sankaran [Vanderbilt University, Nashville, TN (United States)


    The effect of fatigue on human performance has been observed to be an important factor in many industrial accidents. However, defining and measuring fatigue is not easily accomplished. This creates difficulties in including fatigue effects in probabilistic risk assessments (PRA) of complex engineering systems that seek to include human reliability analysis (HRA). Thus the objectives of this paper are to discuss (1) the importance of the effects of fatigue on performance, (2) the difficulties associated with defining and measuring fatigue, (3) the current status of inclusion of fatigue in HRA methods, and (4) the future directions and challenges for the inclusion of fatigue, specifically sleep deprivation, in HRA. Highlights: We highlight the need for fatigue and sleep deprivation effects on performance to be included in human reliability analysis (HRA) methods; current methods do not explicitly include sleep deprivation effects. We discuss the difficulties in defining and measuring fatigue. We review sleep deprivation research, and discuss the limitations and future needs of the current HRA methods.

  14. Markovian reliability analysis under uncertainty with an application on the shutdown system of the Clinch River Breeder Reactor

    Papazoglou, I A; Gyftopoulos, E P


    A methodology for the assessment of the uncertainties about the reliability of nuclear reactor systems described by Markov models is developed, and the uncertainties about the probability of loss of coolable core geometry (LCG) of the Clinch River Breeder Reactor (CRBR) due to shutdown system failures are assessed. Uncertainties are expressed by treating the failure rates, the repair rates and all other input variables of the reliability analysis as random variables, distributed according to known probability density functions (pdf). The pdf of the reliability is then calculated by the moment matching technique. Two methods have been employed for the determination of the moments of the reliability: Monte Carlo simulation and Taylor-series expansion. These methods are adapted to Markovian problems and compared for accuracy and efficiency.
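    The Monte Carlo route to moment matching can be sketched in a few lines. This is a minimal illustration under strong assumptions: a one-component model R(t) = exp(-λt) with a lognormal failure-rate prior, both invented here, not the CRBR shutdown-system Markov model.

```python
import math
import random

def reliability(lam, t):
    # survival probability of a single non-repairable component (two-state Markov model)
    return math.exp(-lam * t)

def mc_moments(t, n=50_000, seed=1):
    """Mean and variance of the reliability at time t under an uncertain failure rate."""
    rng = random.Random(seed)
    # assumed prior: lognormal failure rate, median 1e-3 per hour, error factor ~3
    mu, sigma = math.log(1e-3), math.log(3.0) / 1.645
    vals = [reliability(rng.lognormvariate(mu, sigma), t) for _ in range(n)]
    m1 = sum(vals) / n
    m2 = sum((v - m1) ** 2 for v in vals) / (n - 1)
    return m1, m2

m1, m2 = mc_moments(t=100.0)
# moment matching: fit a beta distribution (support [0, 1]) to the two moments
common = m1 * (1.0 - m1) / m2 - 1.0
alpha, beta = m1 * common, (1.0 - m1) * common
```

    The Taylor-series alternative mentioned in the abstract would replace the sampling loop with derivatives of the reliability with respect to the input rates.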

  15. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Ronald L. Boring


    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  16. Transient Reliability Analysis Capability Developed for CARES/Life

    Nemeth, Noel N.


    The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of the failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle-material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has

  17. Application Analysis of Strengthened Story in Frame-Core Structures

    SU Yuan; CHEN Chuan-yao; LI Li


    Lateral deflection formulas are presented for analysis of the strengthened story applied to frame-core structures. For frame-core structures with top outriggers and with middle outriggers, the relationship between the stiffness characteristic parameters of the frame and outriggers and the top drift of the structures under different loads is analyzed. It is indicated that when the stiffness characteristic parameter of the frame is large, outrigger efficiency for top drift reduction is low and mutation of internal forces occurs; when the stiffness characteristic parameter of the frame is less than 3, installing the strengthened story is advantageous for frame-core structures.

  18. TMI-2 accident: core heat-up analysis

    Ardron, K.H.; Cain, D.G.


    This report summarizes NSAC study of reactor core thermal conditions during the accident at Three Mile Island, Unit 2. The study focuses primarily on the time period from core uncovery (approximately 113 minutes after turbine trip) through the initiation of sustained high pressure injection (after 202 minutes). The transient analysis is based upon established sequences of events; plant data; post-accident measurements; interpretation or indirect use of instrument responses to accident conditions.

  19. Thermal hydraulic analysis of the JMTR improved LEU-core

    Tabata, Toshio; Nagao, Yoshiharu; Komukai, Bunsaku; Naka, Michihiro; Fujiki, Kazuo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment; Takeda, Takashi [Radioactive Waste Management and Nuclear Facility Decommissioning Technology Center, Tokai, Ibaraki (Japan)


    After the investigation of the new core arrangement for the JMTR reactor in order to enhance the fuel burn-up and consequently extend the operation period, the ''improved LEU core'' that utilized 2 additional fuel elements instead of formerly installed reflector elements was adopted. This report describes the results of the thermal-hydraulic analysis of the improved LEU core as part of the safety analysis for the licensing. The analysis covers steady state, abnormal operational transients and accidents, which were described in the annexes of the licensing documents as design bases events. Calculation conditions for the computer codes were conservatively determined based on the neutronic analysis results and others. The analysis revealed that the safety criteria for fuel temperature, DNBR and primary coolant temperature were satisfied, and the results were used in the licensing. The operation license of the JMTR with the improved LEU core was granted in March 2001, and the reactor operation with the new core started in November 2001 as the 142nd operation cycle. (author)

  20. Productivity enhancement and reliability through AutoAnalysis

    Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian


    The decreasing size and increasing complexity of photomask features, driven by the push to ever smaller technology nodes, places more and more challenges on the mask house, particularly in terms of yield management and cost reduction. Particularly challenging for mask shops is the inspection, repair and review cycle which requires more time and skill from operators due to the higher number of masks required per technology node and larger nuisance defect counts. While the measurement throughput of the AIMS™ platform has been improved in order to keep pace with these trends, the analysis of aerial images has seen little advancement and remains largely a manual process. This manual analysis of aerial images is time consuming, dependent on the skill level of the operator and significantly contributes to the overall mask manufacturing process flow. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated analysis of AIMS™ aerial images. Direct communication with the AIMS™ system allows automated data transfer and analysis parallel to the measurements. User defined report templates allow the relevant data to be output in a manner that can be tailored to various internal needs and support the requests of your customers. Productivity is significantly improved due to the fast analysis, operator time is saved and made available for other tasks and reliability is no longer a concern as the most defective region is always and consistently captured. In this paper the concept and approach of AutoAnalysis will be presented as well as an update to the status of the project. The benefits arising from the use of AutoAnalysis will be discussed in more detail and a study will be performed in order to demonstrate.

  1. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Clayson, Peter E; Miller, Gregory A


    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue.
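    The core G-theory calculation the toolbox automates can be sketched for the simplest case, a fully crossed persons x trials design. The ERA Toolbox itself is Matlab; this standalone Python version and the example scores are illustrative only.

```python
import statistics

def g_coefficient(scores):
    """Generalizability coefficient for scores[p][t]: person p, trial t (fully crossed)."""
    n_p, n_t = len(scores), len(scores[0])
    grand = statistics.mean(v for row in scores for v in row)
    p_means = [statistics.mean(row) for row in scores]
    t_means = [statistics.mean(col) for col in zip(*scores)]
    ss_p = n_t * sum((m - grand) ** 2 for m in p_means)
    ss_t = n_p * sum((m - grand) ** 2 for m in t_means)
    ss_tot = sum((v - grand) ** 2 for row in scores for v in row)
    ss_pt = ss_tot - ss_p - ss_t                     # person x trial residual
    ms_p = ss_p / (n_p - 1)
    ms_pt = ss_pt / ((n_p - 1) * (n_t - 1))
    var_pt = ms_pt                                   # error variance component
    var_p = max((ms_p - ms_pt) / n_t, 0.0)           # person variance component
    # share of observed-score variance due to true person differences,
    # for a mean score over n_t trials
    return var_p / (var_p + var_pt / n_t)

# persons differ far more than trials do, so the coefficient is near 1
g = g_coefficient([[10, 11], [20, 21], [30, 29]])
```

    Increasing the number of trials retained for averaging shrinks the error term var_pt / n_t, which is exactly the trade-off the toolbox quantifies.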

  2. Reliability analysis and updating of deteriorating systems with subset simulation

    Schneider, Ronald; Thöns, Sebastian; Straub, Daniel


    Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...
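    The subset simulation step named in the abstract can be illustrated compactly. This is a toy sketch under strong assumptions: a textbook limit state g(u) = b - u[0] in standard normal space (exact failure probability Phi(-b)), a basic modified-Metropolis sampler, and made-up tuning constants; the paper's deterioration and structural models are far richer.

```python
import math
import random

def normal_pdf(x):
    return math.exp(-0.5 * x * x)   # unnormalized standard normal density

def subset_simulation(g, dim, n=1000, p0=0.1, seed=7, max_levels=10):
    """Estimate P(g(X) <= 0) for small failure probabilities, X ~ N(0, I)."""
    rng = random.Random(seed)
    xs = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(max_levels):
        gs = [g(x) for x in xs]
        order = sorted(range(n), key=lambda i: gs[i])
        n_seed = int(n * p0)
        thresh = gs[order[n_seed - 1]]               # p0-quantile of g
        if thresh <= 0.0:                            # final level reached
            return prob * sum(1 for v in gs if v <= 0.0) / n
        prob *= p0
        seeds = [xs[i] for i in order[:n_seed]]
        steps = n // n_seed
        xs = []
        for s in seeds:                              # grow chains conditional on g <= thresh
            cur = list(s)
            for _ in range(steps):
                cand = list(cur)
                for j in range(dim):                 # component-wise Metropolis proposal
                    c = cur[j] + rng.uniform(-1.0, 1.0)
                    if rng.random() < min(1.0, normal_pdf(c) / normal_pdf(cur[j])):
                        cand[j] = c
                if g(cand) <= thresh:
                    cur = cand
                xs.append(list(cur))
    return prob

# toy limit state: failure when u[0] >= 3; exact answer is Phi(-3) ~ 1.35e-3
p_f = subset_simulation(lambda u: 3.0 - u[0], dim=1)
```

    Each level conditions the samples on progressively rarer events, so probabilities far below crude Monte Carlo reach become tractable with modest sample sizes.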

  3. Suitability Analysis of Continuous-Use Reliability Growth Projection Models


    exists for all types, shapes, and sizes. The primary focus of this study is a comparison of reliability growth projection models designed for... requirements to use reliability growth models, recent studies have noted trends in reliability failures throughout the DoD. In [14] Dr. Michael... a strict exponential distribution was used to stay within their assumptions. In reality, however, reliability growth models often must be used

  4. New Mathematical Derivations Applicable to Safety and Reliability Analysis

    Cooper, J.A.; Ferson, S.


    Boolean logic expressions are often derived in safety and reliability analysis. Since the values of the operands are rarely exact, accounting for uncertainty with the tightest justifiable bounds is important. Accurate determination of result bounds is difficult when the inputs have constraints. One example of a constraint is that an uncertain variable that appears multiple times in a Boolean expression must always have the same value, although the value cannot be exactly specified. A solution for this repeated variable problem is demonstrated for two Boolean classes. The classes, termed functions with unate variables (including, but not limited to unate functions), and exclusive-or functions, frequently appear in Boolean equations for uncertain outcomes portrayed by logic trees (event trees and fault trees).
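    The repeated-variable problem can be seen in a few lines. In this hedged sketch, the same uncertain probability p appears twice in P(A XOR B) for independent events; letting the two occurrences vary freely (naive corner evaluation) gives looser bounds than enforcing that both occurrences share one value. The interval [0.4, 0.6] and the grid scan are illustrative, not the paper's method.

```python
from itertools import product

def xor_prob(p1, p2):
    # P(A xor B) for independent events with P(A) = p1, P(B) = p2
    return p1 + p2 - 2.0 * p1 * p2

lo, hi = 0.4, 0.6  # p is known only to lie in this interval

# naive bounds: treat the two occurrences of p as free to vary independently
# (the expression is linear in each occurrence, so corner evaluation suffices)
corners = [xor_prob(a, b) for a, b in product((lo, hi), repeat=2)]
naive = (min(corners), max(corners))

# constrained bounds: both occurrences are the same variable, so scan one p
grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
diag = [xor_prob(p, p) for p in grid]
tight = (min(diag), max(diag))

# the naive upper bound (0.52) overstates the true maximum (0.50)
```

    The abstract's unate-variable and exclusive-or classes are exactly the cases where such constrained bounds can be computed without a brute-force scan.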

  5. Applicability of simplified human reliability analysis methods for severe accidents

    Boring, R.; St Germain, S. [Idaho National Lab., Idaho Falls, Idaho (United States); Banaseanu, G.; Chatri, H.; Akl, Y. [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)


    Most contemporary human reliability analysis (HRA) methods were created to analyse design-basis accidents at nuclear power plants. As part of a comprehensive expansion of risk assessments at many plants internationally, HRAs will begin considering severe accident scenarios. Severe accidents, while extremely rare, constitute high consequence events that significantly challenge successful operations and recovery. Challenges during severe accidents include degraded and hazardous operating conditions at the plant, the shift in control from the main control room to the technical support center, the unavailability of plant instrumentation, and the need to use different types of operating procedures. Such shifts in operations may also test key assumptions in existing HRA methods. This paper discusses key differences between design basis and severe accidents, reviews efforts to date to create customized HRA methods suitable for severe accidents, and recommends practices for adapting existing HRA methods that are already being used for HRAs at the plants. (author)

  6. Time-dependent reliability analysis and condition assessment of structures

    Ellingwood, B.R. [Johns Hopkins Univ., Baltimore, MD (United States)


    Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.

  7. Reliability analysis for the quench detection in the LHC machine

    Denz, R; Vergara-Fernández, A


    The Large Hadron Collider (LHC) will incorporate a large number of superconducting elements that require protection in case of a quench. Key elements in the quench protection system are the electronic quench detectors. Their reliability will have an important impact on the downtime as well as on the operational cost of the collider. The expected rates of both false and missed quenches have been computed for several redundant detection schemes. The developed model takes account of the maintainability of the system to optimise the frequency of foreseen checks, and evaluates their influence on the performance of different detection topologies. Given the uncertainty in the failure rates of the components combined with the LHC tunnel environment, the study has been completed with a sensitivity analysis of the results. The chosen detection scheme and the maintainability strategy for each detector family are given.

  8. Petrographic Analysis of Cores from Plant 42


    ...graphic analysis was polished using diamond-incrusted polishing pads. The polished sample was imaged using a Zeiss Stereo Discovery V20 microscope

  9. A reliability analysis of the revised competitiveness index.

    Harris, Paul B; Houston, John M


    This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, inter-item reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high inter-item reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.

  10. Weibull analysis and flexural strength of hot-pressed core and veneered ceramic structures.

    Bona, Alvaro Della; Anusavice, Kenneth J; DeHoff, Paul H


    To test the hypothesis that the Weibull moduli of single- and multilayer ceramics are controlled primarily by the structural reliability of the core ceramic. Methods: Seven groups of 20 bar specimens (25 x 4 x 1.2 mm) were made from the following materials: (1) IPS Empress--a hot-pressed (HP) leucite-based core ceramic; (2) IPS Empress2--a HP lithia-based core ceramic; (3 and 7) Evision--a HP lithia-based core ceramic (ES); (4) IPS Empress2 body--a glass veneer; (5) ES (1.1 mm thick) plus a glaze layer (0.1 mm); and (6) ES (0.8 mm thick) plus veneer (0.3 mm) and glaze (0.1 mm). Each specimen was subjected to four-point flexure loading at a cross-head speed of 0.5 mm/min while immersed in distilled water at 37 degrees C, except for Group 7, which was tested in a dry environment. Failure loads were recorded and the fracture surfaces were examined using SEM. ANOVA and Duncan's multiple range test were used for statistical analysis. No significant differences were found between the mean flexural strength values of Groups 2, 3, 5, and 6 or between Groups 1 and 4 (p>0.05). However, significant differences were found between dry (Group 7) and wet (Groups 1-6) conditions. Glazing had no significant effect on the flexural strength or Weibull modulus. The strength and Weibull modulus of the ES ceramic were similar to those of Groups 5 and 6. The structural reliability of a veneered core ceramic is controlled primarily by that of the core ceramic.
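    The Weibull analysis named in the title can be sketched as a fit on the linearized Weibull plot. This is a hedged illustration: the least-squares estimator and median-rank plotting positions are one common choice, and the strength data below are synthetic (generated from m = 10, characteristic strength 100 MPa), not the paper's measurements.

```python
import math

def weibull_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength sigma0 by
    least squares on ln(-ln(1-F)) versus ln(strength)."""
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    # median-rank plotting positions F_i = (i - 0.3) / (n + 0.4), i = 1..n
    ys = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    sigma0 = math.exp(mx - my / m)   # from the intercept: y = m*ln(sigma) - m*ln(sigma0)
    return m, sigma0

# synthetic flexural strengths drawn as quantiles of Weibull(m=10, sigma0=100)
data = [100.0 * (-math.log(1.0 - (i + 0.5) / 50)) ** 0.1 for i in range(50)]
m, sigma0 = weibull_fit(data)
```

    A higher fitted m means a narrower strength distribution, which is the sense in which the Weibull modulus measures structural reliability here.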

  11. Core Handling and Real-Time Non-Destructive Characterization at the Kochi Core Center: An Example of Core Analysis from the Chelungpu Fault

    W. Lin


    As an example of core analysis carried out in active fault drilling programs, we report the procedures of core handling on the drilling site and non-destructive characterization in the laboratory. This analysis was employed on the core samples taken from Hole B of the Taiwan Chelungpu-fault Drilling Project (TCDP), which penetrated through the active fault that slipped during the 1999 Chi-Chi, Taiwan earthquake. We show results of the non-destructive physical property measurements carried out at the Kochi Core Center (KCC), Japan. Distinct anomalies of lower bulk density and higher magnetic susceptibility were recognized in all three fault zones encountered in Hole B. To keep the core samples in good condition before they are used for various analyses is crucial. In addition, careful planning for core handling and core analyses is necessary for successful investigations. doi:10.2204/

  12. Comparative analysis of the core inflation for Russia

    A. K. Sapova


    The consumer price index is a measure of inflation and it consists of two parts: a persistent component (trend inflation) and short-term shocks. Inflation targeting requires an index of core inflation that is independent of short-term shocks and shows the changes in trend inflation. Central banks pay attention to changes in trend inflation when they take decisions about monetary policy, because it is more informative than the consumer price index for the estimation of medium-term inflation risks. The objective of this article is to identify an index of core inflation that is appropriate for monetary policy. Several different measures of core inflation exist, based on the practice of central banks in different countries and on the economic literature. The comparative analysis presented in this article is based on several types of tests. The research finds that the core consumer price index used today has both advantages and weaknesses. Moreover, there is an index of core inflation based on a new methodology that outperforms the core consumer price index of the Federal State Statistics Service. It is concluded that the Central Bank should focus on this indicator when it takes decisions about monetary policy.
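    One family of core-inflation measures that such comparisons typically cover is the weighted trimmed mean, which discards the most extreme component price changes each period. The sketch below is illustrative only: the component data and the symmetric 20% trim are invented, and the paper's candidate indices may be defined differently.

```python
def trimmed_mean_inflation(changes, weights, trim=0.2):
    """Weighted mean of component price changes after trimming `trim` of the
    total weight from each tail of the sorted distribution of changes."""
    pairs = sorted(zip(changes, weights))
    total = sum(weights)
    lo, hi = trim * total, (1.0 - trim) * total
    acc = num = den = 0.0
    for change, w in pairs:
        start, end = acc, acc + w
        acc = end
        kept = max(0.0, min(end, hi) - max(start, lo))  # weight inside the trim window
        num += change * kept
        den += kept
    return num / den

# an extreme 10% jump in a 10%-weight component is trimmed away entirely -> 0.0
core = trimmed_mean_inflation([0.0, 10.0], [0.9, 0.1], trim=0.2)
```

    By construction, the trimmed mean responds to broad-based price movements but ignores idiosyncratic spikes, which is the behavior the tests in such comparisons try to verify.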

  13. Liquefaction of Tangier soils by using physically based reliability analysis modelling

    Dubujet P.


    Approaches that are widely used to characterize the propensity of soils to liquefaction are mainly of an empirical type. The potential for liquefaction is assessed by using correlation formulas that are based on field tests such as the standard and cone penetration tests. These correlations depend, however, on the site where they were derived. In order to adapt them to other sites where seismic case histories are not available, further investigation is required. In this work, a rigorous one-dimensional modelling of the soil dynamics yielding the liquefaction phenomenon is considered. Field tests consisting of core sampling and cone penetration testing were performed. They provided the necessary data for numerical simulations performed by using the DeepSoil software package. Using reliability analysis, the probability of liquefaction was estimated and the obtained results were used to adapt the Juang method to the particular case of sandy soils located in Tangier.
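    The reliability computation named above can be illustrated in closed form. This is a hedged sketch: it assumes a lognormally distributed factor of safety against liquefaction with made-up numbers, whereas the paper couples DeepSoil simulations with reliability analysis rather than this one-line model.

```python
import math

def prob_liquefaction(fs_mean, fs_cov):
    """P(FS < 1) for a lognormal factor of safety with given mean and
    coefficient of variation."""
    sigma_ln = math.sqrt(math.log(1.0 + fs_cov ** 2))
    mu_ln = math.log(fs_mean) - 0.5 * sigma_ln ** 2
    beta = mu_ln / sigma_ln                          # reliability index for FS = 1
    return 0.5 * math.erfc(beta / math.sqrt(2.0))    # standard normal Phi(-beta)

p = prob_liquefaction(2.0, 0.3)  # even a nominally safe site has a small probability
```

    Mapping a deterministic factor of safety to a probability in this way is the basic move behind probabilistic adaptations of CPT-based methods such as Juang's.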

  14. Failure Analysis towards Reliable Performance of Aero-Engines

    T. Jayakumar


    Aero-engines are critical components whose reliable performance decides the primary safety of an aircraft/helicopter. This is met by a rigorous maintenance schedule with periodic inspection/nondestructive testing of various engine components. In spite of these measures, failure of aero-engines does occur rather frequently in comparison to failure of other components. Systematic failure analysis helps one to identify the root cause of the failure, thus enabling remedial measures to prevent recurrence of such failures. Turbine blades made of nickel- or cobalt-based alloys are used in aero-engines. These blades are subjected to complex loading conditions at elevated temperatures. The main causes of failure of blades are attributed to creep, thermal fatigue and hot corrosion. Premature failure of blades in the combustion zone was reported in one of the aero-engines. The engine had both the compressor and the free-turbine on a common shaft. Detailed failure analysis revealed the presence of creep voids in the blades that failed. Failure of turbine blades was also detected in another aero-engine operating in a coastal environment. In this failure, the protective coating on the blades was cracked at many locations. Grain boundary spikes were observed at these locations. The primary cause of this failure was hot corrosion followed by creep damage

  15. Multi-Unit Considerations for Human Reliability Analysis

    St. Germain, S.; Boring, R.; Banaseanu, G.; Akl, Y.; Chatri, H.


    This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
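
    The SPAR-H quantification scheme referenced above can be sketched as follows. The nominal error probabilities and the adjustment rule follow the published SPAR-H method (NUREG/CR-6883), but the PSF multipliers and the multi-unit scenario below are hypothetical, illustrative values only.

```python
# Sketch of SPAR-H quantification: human error probability (HEP) is a
# nominal HEP scaled by the product of performance shaping factor (PSF)
# multipliers, with an adjustment when three or more PSFs are negative.

NOMINAL_HEP = {"diagnosis": 0.01, "action": 0.001}

def spar_h_hep(task_type, psf_multipliers):
    """HEP = nominal HEP x composite PSF multiplier.

    When three or more PSFs are negative (multiplier > 1), SPAR-H applies
    an adjustment factor so the HEP cannot exceed 1.0.
    """
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1) + 1)
    return min(nhep * composite, 1.0)

# Hypothetical multi-unit diagnosis task: high stress (x2), poor
# ergonomics from a shared control point (x10)
hep = spar_h_hep("diagnosis", [2, 10])
```

    A multi-unit accident would typically change which PSF multipliers apply (e.g. stress, workload, available time), not the structure of the calculation itself.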

  16. Is the simple auger coring method reliable for below-ground standing biomass estimation in Eucalyptus forest plantations?

    Levillain, Joseph; Thongo M'Bou, Armel; Deleporte, Philippe; Saint-André, Laurent; Jourdan, Christophe


    Despite their importance for plant production, estimates of below-ground biomass and its distribution in the soil remain difficult and time consuming, and no single reliable methodology is available for different root types. To identify the best method for root biomass estimation, four different methods, with differing labour requirements, were tested at the same location. The four methods, applied in a 6-year-old Eucalyptus plantation in Congo, were based on different soil sampling volumes: auger (8 cm in diameter), monolith (25 × 25 cm quadrat), half Voronoi trench (1.5 m(3)) and a full Voronoi trench (3 m(3)), chosen as the reference method. With the reference method (0-1 m deep), fine-root biomass (FRB, diameter <2 mm), medium-root biomass (MRB, diameter 2-10 mm, estimated at 2.0 t ha(-1)), coarse-root biomass (CRB, diameter >10 mm, 5.6 t ha(-1)) and stump biomass (6.8 t ha(-1)) were estimated. Total below-ground biomass was estimated at 16.2 t ha(-1) (root : shoot ratio equal to 0.23) for this plantation with a density of 800 trees ha(-1). The density of FRB was very high (0.56 t ha(-1)) in the top soil horizon (0-3 cm layer) and decreased greatly (0.3 t ha(-1)) with depth (50-100 cm). Without labour requirement considerations, no significant differences were found between the four methods for FRB and MRB; however, CRB was better estimated by the half and full Voronoi trenches. When labour requirements were considered, the most effective method was auger coring for FRB, whereas the half and full Voronoi trenches were the most appropriate methods for MRB and CRB, respectively. As CRB combined with stumps amounted to 78 % of total below-ground biomass, a full Voronoi trench is strongly recommended when estimating total standing root biomass. Conversely, for FRB estimation, auger coring is recommended, with a design pattern accounting for the spatial variability of fine-root distribution.

  17. Fuzzy Reliability Analysis of the Shaft of a Steam Turbine


    Field surveying shows that failure of the steam turbine's coupling is due to fatigue caused by compound stress. Fuzzy mathematics was applied to obtain the membership function of the fatigue strength rule. A formula for the fuzzy reliability of the coupling was derived and a theory of the coupling's fuzzy reliability was set up. The calculation method for the fuzzy reliability is explained with an illustrative example.

  18. Reliability of videotaped observational gait analysis in patients with orthopedic impairments

    Brunnekreef, J.J.; Uden, C. van; Moorsel, S. van; Kooloos, J.G.M.


    BACKGROUND: In clinical practice, visual gait observation is often used to determine gait disorders and to evaluate treatment. Several reliability studies on observational gait analysis have been described in the literature and generally showed moderate reliability. However, patients with orthopedic

  19. Wind energy Computerized Maintenance Management System (CMMS) : data collection recommendations for reliability analysis.

    Peters, Valerie A.; Ogilvie, Alistair; Veers, Paul S.


    This report, written by Sandia National Laboratories, addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. It is intended to help the reader develop a basic understanding of what data are needed from a Computerized Maintenance Management System (CMMS) and other data systems for reliability analysis. The report provides: (1) a list of the data needed to support reliability and availability analysis; and (2) specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment and a wider variety of analysis and reporting needs.

  20. Use of bone core biopsies for cytogenetic analysis

    Martin, P.; Rowley, J.D.; Baron, J.M.


    Cultures of bone core specimens have proved satisfactory for cytogenetic analysis in patients from whom it was impossible to obtain a bone marrow aspirate, or in whose peripheral blood dividing myeloid cells were absent or insufficient in number. The quality of the metaphase chromosome is adequate for banding studies.

  1. Stress analysis of portable safety platform (Core Sampler Truck)

    Ziada, H.H.


    This document provides the stress analysis and evaluation of the portable platform of rotary mode core sampler truck No. 2 (RMCST #2). The platform comprises railing, posts, deck, legs, and a portable ladder; it is restrained from lateral motion by means of two brackets added to the drill-head service platform.

  2. Reliable Classification of Geologic Surfaces Using Texture Analysis

    Foil, G.; Howarth, D.; Abbey, W. J.; Bekker, D. L.; Castano, R.; Thompson, D. R.; Wagstaff, K.


    Communication delays and bandwidth constraints are major obstacles for remote exploration spacecraft. Due to such restrictions, spacecraft could make use of onboard science data analysis to maximize scientific gain, through capabilities such as the generation of bandwidth-efficient representative maps of scenes, autonomous instrument targeting to exploit targets of opportunity between communications, and downlink prioritization to ensure fast delivery of tactically-important data. Of particular importance to remote exploration is the precision of such methods and their ability to reliably reproduce consistent results in novel environments. Spacecraft resources are highly oversubscribed, so any onboard data analysis must provide a high degree of confidence in its assessment. The TextureCam project is constructing a "smart camera" that can analyze surface images to autonomously identify scientifically interesting targets and direct narrow field-of-view instruments. The TextureCam instrument incorporates onboard scene interpretation and mapping to assist these autonomous science activities. Computer vision algorithms map scenes such as those encountered during rover traverses. The approach, based on a machine learning strategy, trains a statistical model to recognize different geologic surface types and then classifies every pixel in a new scene according to these categories. We describe three methods for increasing the precision of the TextureCam instrument. The first uses ancillary data to segment challenging scenes into smaller regions having homogeneous properties. These subproblems are individually easier to solve, preventing uncertainty in one region from contaminating those that can be confidently classified. The second involves a Bayesian approach that maximizes the likelihood of correct classifications by abstaining from ambiguous ones. We evaluate these two techniques on a set of images acquired during field expeditions in the Mojave Desert. Finally, the

  3. Reliability Analysis and Modeling of ZigBee Networks

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In the star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. However, a division technique is applied to mesh networks because their complexity is higher than that of the others. A mesh network using division is classified into several non-reducible series systems and edge parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability increases for mesh networks when the number of edges in parallel systems increases, while the reliability quickly drops when the number of edges and the number of nodes increase for all three networks. Greater use of resources is another factor that decreases reliability. However, lower network reliability will occur due to
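
    The series/parallel reliability-block-diagram algebra applied to the star, tree and mesh topologies reduces to two standard formulas, sketched here with hypothetical component reliabilities:

```python
# Series/parallel RBD building blocks: a series system works only if every
# block works; a parallel (redundant) system fails only if every block fails.
from functools import reduce

def series(reliabilities):
    """R_series = product of R_i."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(reliabilities):
    """R_parallel = 1 - product of (1 - R_i)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical star network: coordinator in series with a redundant link layer
r_links = parallel([0.9, 0.9, 0.9])   # three redundant links
r_system = series([0.95, r_links])    # coordinator x link layer
```

    A mesh network decomposed into non-reducible series systems and edge parallel systems, as in the paper, is evaluated by nesting these two functions.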

  4. The features of Drosophila core promoters revealed by statistical analysis

    Trifonov Edward N


    Abstract Background Experimental investigation of transcription is still a very labor- and time-consuming process. Only a few transcription initiation scenarios have been studied in detail. The mechanism of interaction between the basal machinery and the promoter, in particular core promoter elements, is not known for the majority of identified promoters. In this study, we reveal various transcription initiation mechanisms by statistical analysis of 3393 nonredundant Drosophila promoters. Results Using Drosophila-specific position-weight matrices, we identified promoters containing the TATA box, Initiator (Inr), Downstream Promoter Element (DPE), and Motif Ten Element (MTE), as well as core elements discovered in human (TFIIB Recognition Element (BRE) and Downstream Core Element (DCE)). Promoters utilizing known synergetic combinations of two core elements (TATA_Inr, Inr_MTE, Inr_DPE, and DPE_MTE) were identified. We also establish the existence of promoters with potentially novel synergetic combinations: TATA_DPE and TATA_MTE. Our analysis revealed several motifs with the features of promoter elements, including possible novel core promoter element(s). Comparison of human and Drosophila showed consistent percentages of promoters with TATA, Inr, DPE, and synergetic combinations thereof, as well as most of the same functional and mutual positions of the core elements. No statistical evidence of MTE utilization in human was found. Distinct nucleosome positioning in particular promoter classes was revealed. Conclusion We present lists of promoters that potentially utilize the aforementioned elements/combinations. The number of these promoters is two orders of magnitude larger than the number of promoters in which transcription initiation has been experimentally studied. The sequences are ready to be experimentally tested or used for further statistical analysis. The developed approach may be utilized for other species.

  5. Effectiveness and reliability analysis of emergency measures for flood prevention

    Lendering, K.T.; Jonkman, S.N.; Kok, M.


    During flood events emergency measures are used to prevent breaches in flood defences. However, there is still limited insight into their reliability and effectiveness. The objective of this paper is to develop a method to determine the reliability and effectiveness of emergency measures for flood defences.


  7. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    Bell, B.J.; Swain, A.D.


    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

  8. Wind turbine reliability : a database and analysis approach.

    Linsday, James (ARES Corporation); Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S. (ARES Corporation)


    The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also grown remarkably. As the market continues to expand, and as wind generation continues to gain a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability, that is, the failures experienced by systems and components. Reliability tools are demonstrated, the data needed to understand and catalog failure events are described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process for controlling industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and improve the operable reliability of wind turbine installations.

  9. Magnetic resonance imaging in laboratory petrophysical core analysis

    Mitchell, J.; Chandrasekera, T. C.; Holland, D. J.; Gladden, L. F.; Fordham, E. J.


    Magnetic resonance imaging (MRI) is a well-known technique in medical diagnosis and materials science. In the more specialized arena of laboratory-scale petrophysical rock core analysis, the role of MRI has undergone a substantial change in focus over the last three decades. Initially, alongside the continual drive to exploit higher magnetic field strengths in MRI applications for medicine and chemistry, the same trend was followed in core analysis. However, the spatial resolution achievable in heterogeneous porous media is inherently limited due to the magnetic susceptibility contrast between solid and fluid. As a result, imaging resolution at the length-scale of typical pore diameters is not practical and so MRI of core-plugs has often been viewed as an inappropriate use of expensive magnetic resonance facilities. Recently, there has been a paradigm shift in the use of MRI in laboratory-scale core analysis. The focus is now on acquiring data in the laboratory that are directly comparable to data obtained from magnetic resonance well-logging tools (i.e., a common physics of measurement). To maintain consistency with well-logging instrumentation, it is desirable to measure distributions of transverse (T2) relaxation time-the industry-standard metric in well-logging-at the laboratory-scale. These T2 distributions can be spatially resolved over the length of a core-plug. The use of low-field magnets in the laboratory environment is optimal for core analysis not only because the magnetic field strength is closer to that of well-logging tools, but also because the magnetic susceptibility contrast is minimized, allowing the acquisition of quantitative image voxel (or pixel) intensities that are directly scalable to liquid volume. Beyond simple determination of macroscopic rock heterogeneity, it is possible to utilize the spatial resolution for monitoring forced displacement of oil by water or chemical agents, determining capillary pressure curves, and estimating

  10. Advanced response surface method for mechanical reliability analysis

    LÜ Zhen-zhou; ZHAO Jie; YUE Zhu-feng


    Based on the classical response surface method (RSM), a novel RSM using improved experimental points (EPs) is presented for reliability analysis. Two novel points are included in the presented method. One is the use of linear interpolation, by which the total EPs for determining the RS are selected to be closer to the actual failure surface; the other is the application of sequential linear interpolation to control the distance between the surrounding EPs and the center EP, by which the presented method ensures that the RS fits the actual failure surface in the region of maximum likelihood as the center EPs converge to the actual most probable point (MPP). Since the presented method increases the fitting precision of the RS to the actual failure surface in the vicinity of the MPP, which contributes most significantly to the probability of the failure surface being exceeded, the precision of the failure probability calculated from the RS is increased as well. Numerical examples illustrate the accuracy and efficiency of the presented method.
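
    As a minimal illustration of the classical RSM step this paper builds on, the sketch below fits a quadratic response surface to samples of a limit-state function, which can then stand in for the expensive true function. The limit-state function and the experimental points here are made up for illustration.

```python
# Fit g_hat(x) = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 to samples of a
# limit-state function g, by least squares.
import numpy as np

def g(x1, x2):                       # hypothetical limit-state function
    return x1**2 + 2 * x2 - 8

# experimental points: a center point plus offsets along each axis
pts = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]])
y = np.array([g(a, b) for a, b in pts])

# design matrix for the quadratic surface (no cross term, as in classical RSM)
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0]**2, pts[:, 1]**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# evaluate the fitted surface at a new point
x_new = np.array([2.0, 3.0])
g_hat = coef @ np.array([1, x_new[0], x_new[1], x_new[0]**2, x_new[1]**2])
```

    In an actual RSM reliability analysis the fitted surface, not the true function, is handed to a FORM/SORM or Monte Carlo routine, and the experimental points are then re-centered on the estimated MPP and the fit repeated, which is the iteration the paper improves.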

  11. Code Coupling for Multi-Dimensional Core Transient Analysis

    Park, Jin-Woo; Park, Guen-Tae; Park, Min-Ho; Ryu, Seok-Hee; Um, Kil-Sup; Lee, Jae-Il [KEPCO NF, Daejeon (Korea, Republic of)]


    After a CEA ejection, the nuclear power of the reactor increases exponentially until the Doppler effect becomes important and turns the reactivity balance and power down to lower levels. Although this happens in a very short period of time, only a few seconds, the energy generated can be very significant and cause fuel failures. The current safety analysis methodology, which is based on overly conservative assumptions with the point kinetics model, results in quite adverse consequences. Thus, KEPCO Nuclear Fuel (KNF) is developing a multi-dimensional safety analysis methodology to mitigate the consequences of the single CEA ejection accident. For this purpose, the three-dimensional core neutron kinetics code ASTRA, the sub-channel analysis code THALES, and the fuel performance analysis code FROST, all of which have transient calculation capability, were coupled using the message passing interface (MPI). This paper presents the methodology used for code coupling and the preliminary simulation results with the coupled code system (CHASER). The multi-dimensional core transient analysis code system, CHASER, has been developed and applied to simulate a single CEA ejection accident. CHASER gave a good prediction of multi-dimensional core behavior during the transient. In the near future, a multi-dimensional CEA ejection analysis methodology using CHASER is planned to be developed. CHASER is expected to be a useful tool to gain safety margin for reactivity initiated accidents (RIAs), such as a single CEA ejection accident.

  12. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    Sozer, Hasan; Tekinerdogan, Bedir; Aksit, Mehmet; Lemos, de Rogerio; Gacek, Cristina


    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  13. Size analysis of single-core magnetic nanoparticles

    Ludwig, Frank; Balceris, Christoph; Viereck, Thilo; Posth, Oliver; Steinhoff, Uwe; Gavilan, Helena; Costo, Rocio; Zeng, Lunjie; Olsson, Eva; Jonasson, Christian; Johansson, Christer


    Single-core iron-oxide nanoparticles with nominal core diameters of 14 nm and 19 nm were analyzed with a variety of non-magnetic and magnetic analysis techniques, including transmission electron microscopy (TEM), dynamic light scattering (DLS), static magnetization vs. magnetic field (M-H) measurements, ac susceptibility (ACS) and magnetorelaxometry (MRX). From the experimental data, distributions of core and hydrodynamic sizes are derived. Except for TEM, where a number-weighted distribution is directly obtained, models have to be applied in order to determine size distributions from the measurand. It was found that the mean core diameters determined from TEM, M-H, ACS and MRX measurements agree well although they are based on different models (Langevin function, Brownian and Néel relaxation times). Especially for the sample with large cores, particle interaction effects come into play, causing agglomerates which were detected in DLS, ACS and MRX measurements. We observed that the number and size of agglomerates can be minimized by sufficiently strong dilution of the suspension.
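
    The Langevin-function model mentioned above ties core size to the shape of the M-H curve: larger cores carry larger moments and therefore saturate at lower fields. A minimal single-size sketch, with hypothetical material parameters (real analyses fit a size distribution, not one diameter):

```python
# Reduced magnetization of a superparamagnetic core via the Langevin function
# L(xi) = coth(xi) - 1/xi, with xi = mu0 * m * H / (kB * T).
import math

MU0 = 4e-7 * math.pi        # vacuum permeability (T*m/A)
KB = 1.380649e-23           # Boltzmann constant (J/K)

def langevin(xi):
    """L(xi) = coth(xi) - 1/xi; series expansion near zero avoids blow-up."""
    if abs(xi) < 1e-6:
        return xi / 3.0
    return 1.0 / math.tanh(xi) - 1.0 / xi

def reduced_magnetization(d_core_m, H, Ms_bulk=480e3, T=295.0):
    """M/Ms for a single core of diameter d (m) in field H (A/m).
    Ms_bulk ~ 480 kA/m is a typical magnetite value (assumed here)."""
    volume = math.pi / 6.0 * d_core_m**3
    moment = Ms_bulk * volume                  # particle moment (A*m^2)
    xi = MU0 * moment * H / (KB * T)
    return langevin(xi)

# a 19 nm core approaches saturation faster than a 14 nm core
m14 = reduced_magnetization(14e-9, 10e3)
m19 = reduced_magnetization(19e-9, 10e3)
```

    Fitting this model (or its integral over a lognormal size distribution) to measured M-H data is what yields the core diameters compared against TEM in the record above.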

  14. Optimization Design and Finite Element Analysis of Core Cutter

    CAO Pin-lu; YIN Kun; PENG Jian-ming; LIU Jian-lin


    The hydro-hammer sampler is a new type of sampler compared with traditional ones. In this new offshore sampler, the structure of the core cutter has a significant effect on penetration and core recovery. In our experiments, a commercial finite element code with a capability of simulating large-strain frictional contact between two or more solid bodies is used to simulate the core cutter-soil interaction. The effects of the cutting edge shape, the diameter and the edge angle on penetration are analyzed by non-linear transient dynamic analysis using a finite element method (FEM). Simulation results show that the cutter shape clearly has an effect on penetration and core recovery. In addition, the penetration of the sampler increases with an increase in the inside diameter of the cutter, but decreases with an increase in the cutting angle. Based on these analyses, an optimum structure of the core cutter was designed and tested in the north margin of the Dalian gulf. Experimental results show that the penetration rate is about 16.5 m/h in silty clay and 15.4 m/h in cohesive clay, while the recovery is 68% and 83.3% respectively.

  15. Spinal appearance questionnaire: factor analysis, scoring, reliability, and validity testing.

    Carreon, Leah Y; Sanders, James O; Polly, David W; Sucato, Daniel J; Parent, Stefan; Roy-Beaudry, Marjolaine; Hopkins, Jeffrey; McClung, Anna; Bratcher, Kelly R; Diamond, Beverly E


    Cross sectional. This study presents the factor analysis of the Spinal Appearance Questionnaire (SAQ) and its psychometric properties. Although the SAQ has been administered to a large sample of patients with adolescent idiopathic scoliosis (AIS) treated surgically, its psychometric properties have not been fully evaluated. This study presents the factor analysis and scoring of the SAQ and evaluates its psychometric properties. The SAQ and the Scoliosis Research Society-22 (SRS-22) were administered to AIS patients who were being observed, braced or scheduled for surgery. Standard demographic data and radiographic measures including Lenke type and curve magnitude were also collected. Of the 1802 patients, 83% were female; with a mean age of 14.8 years and mean initial Cobb angle of 55.8° (range, 0°-123°). From the 32 items of the SAQ, 15 loaded on two factors with consistent and significant correlations across all Lenke types. There is an Appearance (items 1-10) and an Expectations factor (items 12-15). Responses are summed giving a range of 5 to 50 for the Appearance domain and 5 to 20 for the Expectations domain. The Cronbach's α was 0.88 for both domains and Total score with a test-retest reliability of 0.81 for Appearance and 0.91 for Expectations. Correlations with major curve magnitude were higher for the SAQ Appearance and SAQ Total scores compared to correlations between the SRS Appearance and SRS Total scores. The SAQ and SRS-22 Scores were statistically significantly different in patients who were scheduled for surgery compared to those who were observed or braced. The SAQ is a valid measure of self-image in patients with AIS with greater correlation to curve magnitude than SRS Appearance and Total score. It also discriminates between patients who require surgery from those who do not.
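
    The scoring and internal-consistency statistics reported above (summed domain scores and Cronbach's α) can be sketched as follows; the item responses below are made up, not SAQ data:

```python
# Domain scores are simple sums of item responses; internal consistency is
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).

def domain_score(item_responses):
    """Sum item responses (each typically on a 1-5 Likert scale)."""
    return sum(item_responses)

def cronbach_alpha(items):
    """items: one inner list per item, each holding all respondents' answers."""
    k = len(items)
    n = len(items[0])

    def var(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# three hypothetical items answered by four respondents
items = [[1, 2, 4, 5], [2, 2, 5, 4], [1, 3, 4, 5]]
alpha = cronbach_alpha(items)
```

    Test-retest reliability, also reported in the record, is a different statistic (a correlation between two administrations) and is not shown here.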

  16. Reliability Modeling and Analysis of SCI Topological Network

    Hongzhe Xu


    The problem of reliability modeling of Scalable Coherent Interface (SCI) rings and topological networks is studied. Reliability models of three SCI rings are developed and the factors which influence the reliability of SCI rings are studied. By calculating the shortest path matrix and the path quantity matrix of different types of SCI network topology, the communication characteristics of SCI networks are obtained. For node-damage and edge-damage situations, the survivability of the SCI topological network is studied.
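
    The shortest path matrix mentioned above can be computed with the standard Floyd-Warshall all-pairs algorithm; a minimal sketch on a hypothetical four-node unidirectional ring (hop counts as weights):

```python
# Floyd-Warshall: all-pairs shortest paths on an adjacency matrix.
INF = float("inf")

def shortest_path_matrix(adj):
    """adj: n x n matrix of edge weights (INF where no edge).
    Returns the all-pairs shortest path matrix."""
    n = len(adj)
    d = [row[:] for row in adj]
    for i in range(n):
        d[i][i] = 0
    for k in range(n):                      # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# 4-node unidirectional ring: 0 -> 1 -> 2 -> 3 -> 0
ring = [[INF, 1, INF, INF],
        [INF, INF, 1, INF],
        [INF, INF, INF, 1],
        [1, INF, INF, INF]]
D = shortest_path_matrix(ring)   # e.g. node 0 reaches node 3 in 3 hops
```

    Survivability analysis of the kind described in the record recomputes this matrix after deleting damaged nodes or edges and checks which pairs remain connected.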

  17. System Reliability Analysis of Redundant Condition Monitoring Systems

    YI Pengxing; HU Youming; YANG Shuzi; WU Bo; CUI Feng


    The development and application of new reliability models and methods are presented to analyze the system reliability of complex condition monitoring systems. The methods include a method for analyzing the failure modes of a type of redundant condition monitoring system (RCMS) by invoking a fault tree model, Markov modeling techniques for analyzing the system reliability of RCMS, and methods for estimating the Markov model parameters. Furthermore, a computational case is investigated and conclusions from this case are summarized. Results show that the method proposed here is practical and valuable for designing condition monitoring systems and their maintenance.
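
    As a minimal sketch of the kind of Markov model used for such redundant systems, consider a two-channel monitoring system with states 2 (both channels up), 1 (one up, repairable at rate μ) and 0 (system failed, absorbing). The closed-form MTTF below is the textbook result for a 2-unit parallel system with repair, under assumed rates; it is illustrative, not necessarily the exact model of the paper.

```python
# MTTF of a 2-unit parallel system with repair, from the Markov chain
# 2 --(2*lam)--> 1 --(lam)--> 0, with repair 1 --(mu)--> 2:
# MTTF = (3*lam + mu) / (2*lam^2).

def mttf_two_channel(lam, mu):
    """Mean time to failure; lam = per-channel failure rate, mu = repair rate."""
    return (3 * lam + mu) / (2 * lam ** 2)

lam = 1e-4   # per-channel failure rate (1/h), assumed
mu = 1e-2    # repair rate (1/h), assumed
with_repair = mttf_two_channel(lam, mu)
no_repair = mttf_two_channel(lam, 0.0)   # reduces to 3/(2*lam)
```

    The comparison of the two values shows why repairable redundancy dominates the reliability of such systems: repair multiplies the MTTF by roughly μ/(3λ) when μ >> λ.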

  18. Application of Reliability Analysis for Optimal Design of Monolithic Vertical Wall Breakwaters

    Burcharth, H. F.; Sørensen, John Dalsgaard; Christiani, E.


    Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of some of the most important failure modes are described. The failure modes are sliding and slip surface failure of a rubble mound and a clay foundation. Relevant design variables are identified and a reliability-based design optimization procedure is formulated. Results from an illustrative example are given.

  19. Reliability analysis of wind turbines exposed to dynamic loads

    Sørensen, John Dalsgaard


    Wind turbines are exposed to highly dynamic loads that cause fatigue and extreme load effects which are subject to significant uncertainties. Further, reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources. Therefore the turbine components should be designed to have sufficient reliability with respect to both extreme and fatigue loads, and also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower and blades ... the reliability of the structural components. Illustrative examples are presented considering uncertainty modeling and reliability assessment for structural wind turbine components exposed to extreme loads and fatigue, respectively.

  20. Degraded core analysis for the pressurized-water reactor

    Gittus, J.H.


    An analysis of the likelihood and the consequences of 'degraded-core accidents' has been undertaken for the proposed Sizewell B PWR. In such accidents, degradation of the core geometry occurs as a result of overheating. Radionuclides are released and may enter the environment, causing harmful effects. The analysis concludes that degraded-core accidents are highly improbable, the plant having been designed to reduce the frequency of such accidents to a value of order 10^-6 per year. The building containing the reactor would only fail in a small proportion of degraded-core accidents. In the great majority of cases the containment would remain intact and the release of radioactivity to the environment would be small. The risks to individuals have been calculated for both immediate and long-term effects. Although the estimates of risk are approximate, studies to investigate the uncertainties, and sensitivities to different assumptions, show that potential errors are small compared with the very large 'margin of safety' between the risks estimated for Sizewell B and those that already exist in society.

  1. Operation of Reliability Analysis Center (FY85-87)


    environmental conditions at the time of the reported failure as well as the exact nature of the failure. The diskette format (FMDR-21A) contains ... based upon the reliability and maintainability standards and tasks delineated in NAC R&M-STD-ROO010 (Reliability Program Requirements Selection). These ... characteristics, environmental conditions at the time of the reported failure, and the exact nature of the failure, which has been categorized as follows

  2. reliability reliability


    The design variables for the design of the slab ... The presence of uncertainty in the analysis and design of engineering ... however, for certain complex elements, the methods ... Standard BS EN 1990, CEN, European Committee for Standardization.

  3. High Resolution Continuous Flow Analysis System for Polar Ice Cores

    Dallmayr, Remi; Azuma, Kumiko; Yamada, Hironobu; Kjær, Helle Astrid; Vallelonga, Paul; Azuma, Nobuhiko; Takata, Morimasa


    In the last decades, Continuous Flow Analysis (CFA) technology for ice core analyses has been developed to reconstruct the past changes of the climate system 1), 2). Compared with traditional analyses of discrete samples, a CFA system offers much faster and higher depth resolution analyses. It also generates a decontaminated sample stream without time-consuming sample processing procedure by using the inner area of an ice-core sample.. The CFA system that we have been developing is currently able to continuously measure stable water isotopes 3) and electrolytic conductivity, as well as to collect discrete samples for the both inner and outer areas with variable depth resolutions. Chemistry analyses4) and methane-gas analysis 5) are planned to be added using the continuous water stream system 5). In order to optimize the resolution of the current system with minimal sample volumes necessary for different analyses, our CFA system typically melts an ice core at 1.6 cm/min. Instead of using a wire position encoder with typical 1mm positioning resolution 6), we decided to use a high-accuracy CCD Laser displacement sensor (LKG-G505, Keyence). At the 1.6 cm/min melt rate, the positioning resolution was increased to 0.27mm. Also, the mixing volume that occurs in our open split debubbler is regulated using its weight. The overflow pumping rate is smoothly PID controlled to maintain the weight as low as possible, while keeping a safety buffer of water to avoid air bubbles downstream. To evaluate the system's depth-resolution, we will present the preliminary data of electrolytic conductivity obtained by melting 12 bags of the North Greenland Eemian Ice Drilling (NEEM) ice core. The samples correspond to different climate intervals (Greenland Stadial 21, 22, Greenland Stadial 5, Greenland Interstadial 5, Greenland Interstadial 7, Greenland Stadial 8). 
We will present results for Greenland Stadial 8, whose depths and ages are between 1723.7 and 1724.8 meters, and 35.520 to
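The weight-based regulation of the open-split debubbler lends itself to a compact illustration. The sketch below is a generic PID loop with invented gains, setpoint, and plant dynamics; it is not the authors' actual controller.

```python
# Illustrative PID control of a debubbler's water weight: the overflow pump
# rate is adjusted so the weight stays near a small safety buffer.
# All numbers (gains, setpoint, inflow) are hypothetical.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: weight rises with melt inflow, falls with overflow pumping.
weight = 5.0   # g, current water buffer
inflow = 0.8   # g/s from the melt head
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=4.0)  # hold a 4 g safety buffer

for _ in range(200):
    dt = 0.1
    # PID output is the desired net weight change; the pump absorbs the rest.
    pump_rate = max(0.0, inflow - pid.update(weight, dt))
    weight += (inflow - pump_rate) * dt
```

Keeping the setpoint low minimizes the mixing volume (and thus depth-resolution smearing), while the buffer itself guards against air reaching the downstream lines.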


    Zhao Jingyi; Zhuoru; Wang Yiqun


    According to the demand for high reliability of the primary cylinder of the hydraulic press, a reliability model of the primary cylinder is built after its reliability analysis. The stress of the primary cylinder is analyzed with the finite element software MARC, and the structural reliability of the cylinder is predicted based on a stress-strength model, which provides a reference for the design.

  5. Preliminary Core Analysis of a Micro Modular Reactor

    Jo, Chang Keun; Chang, Jongwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]; Venneri, Francesco [Ultra Safe Nuclear Corporation, Los Alamos (United States)]; Hawari, Ayman [NC State Univ., Raleigh (United States)]


    The Micro Modular Reactor (MMR) will be 'melt-down proof' (MDP) under all circumstances, including the complete loss of coolant; it will be easily transportable and retrievable, and suitable for use with very little site preparation and Balance of Plant (BOP) requirements for a variety of applications, from power generation and process heat in remote areas to grid-unattached locations, including ship propulsion. The MMR design proposed in this paper has a 3 meter diameter core (2 meter active core), which is suitable for 'factory manufacture', and a service life of a few tens of years for remote deployment. We confirmed the feasibility of a long service life by a preliminary neutronic analysis in terms of the excess reactivity, the temperature feedback coefficient, and the control margins. We are able to achieve a reasonably long core lifetime of 5 ∼ 10 years under the typical thermal hydraulic conditions of a helium-cooled reactor. However, in situations where a longer service period and safety are important, we can reduce the power density to the level of a typical pebble bed reactor. In this case we can design a 10 MWt MMR core with a lifetime of 10 ∼ 40 years without much loss in economics. Several burnable poisons were studied, and erbia mixed in the compact matrix was found to be a reasonably good poison. The temperature feedback coefficients remain negative throughout the lifetime. Drum-type control rods in the reflector region and a few control rods inside the core region are sufficient to control the reactivity during operation and to achieve a safe cold shutdown state.


    C.L. Liu; Z.Z. Lü; Y.L. Xu


    Reliability analysis methods based on the linear damage accumulation law (LDAL) and a load-life interference model are studied in this paper. According to the equal probability rule, equivalent loads are derived, and a reliability analysis method based on the load-life interference model and a recurrence formula is constructed. In conjunction with a finite element analysis (FEA) program, the reliability of an aero engine turbine disk under low cycle fatigue (LCF) conditions has been analyzed. The results show that the turbine disk is safe and that the above reliability analysis methods are feasible.

  7. Reliability Analysis for the Fatigue Limit State of the ASTRID Offshore Platform

    Vrouwenvelder, A.C.W.M.; Gostelie, E.M.


    A reliability analysis with respect to fatigue failure was performed for a concrete gravity platform designed for the Troll field. The reliability analysis was incorporated in the practical design-loop to gain more insight into the complex fatigue problem. In the analysis several parameters relating

  8. The PedsQL™ in Pediatric Patients with Spinal Muscular Atrophy: Feasibility, Reliability, and Validity of the Pediatric Quality of Life Inventory™ Generic Core Scales and Neuromuscular Module

    Iannaccone, Susan T.; Hynan, Linda S.; Morton, Anne; Buchanan, Renee; Limbers, Christine A.; Varni, James W.


    For Phase II and III clinical trials in children with Spinal Muscular Atrophy (SMA), reliable and valid outcome measures are necessary. Since 2000, the American Spinal Muscular Atrophy Randomized Trials (AmSMART) group has established reliability and validity for measures of strength, lung function, and motor function in the population from age 2 years to 18 years. The PedsQL™ (Pediatric Quality of Life Inventory™) Measurement Model was designed to integrate the relative merits of generic and disease-specific approaches, with disease-specific modules. The PedsQL™ 3.0 Neuromuscular Module was designed to measure HRQOL dimensions specific to children ages 2 to 18 years with neuromuscular disorders, including SMA. One hundred seventy-six children with SMA and their parents completed the PedsQL™ 4.0 Generic Core Scales and PedsQL™ 3.0 Neuromuscular Module. The PedsQL™ demonstrated feasibility, reliability and validity in the SMA population. Consistent with the conceptualization of disease-specific symptoms as causal indicators of generic HRQOL, the majority of intercorrelations among the Neuromuscular Module Scales and the Generic Core Scales were in the medium to large range, supporting construct validity. For the purposes of a clinical trial, the PedsQL™ Neuromuscular Module and Generic Core Scales provide an integrated measurement model with the advantages of both generic and condition-specific instruments. PMID:19846309

  9. Analysis of the core genome and pangenome of Pseudomonas putida.

    Udaondo, Zulema; Molina, Lázaro; Segura, Ana; Duque, Estrella; Ramos, Juan L


    Pseudomonas putida are strict aerobes that proliferate in a range of temperate niches and are of interest for environmental applications due to their capacity to degrade pollutants and ability to promote plant growth. Furthermore, solvent-tolerant strains are useful for biosynthesis of added-value chemicals. We present a comprehensive comparative analysis of nine strains and the first characterization of the Pseudomonas putida pangenome. The core genome of P. putida comprises approximately 3386 genes. The most abundant genes within the core genome are those that encode nutrient transporters. Other conserved genes include those for central carbon metabolism through the Entner-Doudoroff pathway, the pentose phosphate cycle, arginine and proline metabolism, and pathways for degradation of aromatic chemicals. Genes that encode transporters, enzymes and regulators for amino acid metabolism (synthesis and degradation) are all part of the core genome, as well as various electron transporters, which enable aerobic metabolism under different oxygen regimes. Within the core genome are 30 genes for flagella biosynthesis and 12 key genes for biofilm formation. Pseudomonas putida strains share 85% of the coding regions with Pseudomonas aeruginosa; however, in P. putida, virulence factors such as exotoxins and type III secretion systems are absent.

  10. Methods for communication-network reliability analysis - Probabilistic graph reduction

    Shooman, Andrew M.; Kershenbaum, Aaron

    The authors have designed and implemented a graph-reduction algorithm for computing the k-terminal reliability of an arbitrary network with possibly unreliable nodes. The two contributions of the present work are a version of the delta-y transformation for k-terminal reliability and an extension of Satyanarayana and Wood's polygon-to-chain transformations to handle graphs with imperfect vertices. The exact algorithm is at least as fast as that of Satyanarayana and Wood, and as the simple algorithm without delta-y and polygon-to-chain transformations, for every problem considered. It runs in linear time on series-parallel graphs and is faster than the above-stated algorithms on large problems, for which they run in exponential time. The approximate algorithms reduce the computation time for the network reliability problem by two to three orders of magnitude for large problems, while providing reasonably accurate answers in most cases.
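The core idea behind such reduction algorithms can be shown with the two simplest transformations. The toy example below computes two-terminal reliability of a small series-parallel network with perfect nodes; the paper's delta-y and polygon-to-chain transformations and imperfect vertices are beyond this sketch.

```python
# Two-terminal reliability by series/parallel edge reduction (illustrative
# only; not the authors' full k-terminal algorithm).

def series(p1, p2):
    # Both edges must work for the combined path to work.
    return p1 * p2

def parallel(p1, p2):
    # The connection fails only if both redundant edges fail.
    return 1 - (1 - p1) * (1 - p2)

# Network: s --a-- m --b-- t, plus a direct edge c from s to t.
p_a, p_b, p_c = 0.9, 0.9, 0.8
p_path = series(p_a, p_b)      # collapse a-b into one edge: 0.81
p_st = parallel(p_path, p_c)   # merge with the direct edge: 0.962
print(p_st)
```

Repeatedly applying these two rules reduces any series-parallel graph to a single equivalent edge; general graphs require the additional transformations discussed in the abstract.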

  11. Reliability Analysis of Random Vibration Transmission Path Systems

    Wei Zhao


    The vibration transmission path systems are generally composed of the vibration source, the vibration transfer path, and the vibration receiving structure. The transfer path is the medium of the vibration transmission, and its randomness greatly influences the transfer reliability. In this paper, based on matrix calculus, the generalized second moment technique, and stochastic finite element theory, an effective approach for the transfer reliability of vibration transfer path systems is provided. The transfer reliability of a vibration transfer path system with uncertain path parameters, including path mass and path stiffness, is analyzed theoretically and computed numerically, and the corresponding mathematical expressions are derived. This provides a theoretical foundation for the dynamic design of vibration systems in practical projects, so that random path parameters can be considered when solving random problems for vibration transfer path systems, which can help avoid system resonance failure.
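For a linear limit state, the second-moment technique mentioned above reduces to a closed-form reliability index. A minimal sketch with invented numbers (not the paper's path-parameter model):

```python
# Illustrative second-moment reliability calculation. The capacity/load
# statistics are hypothetical; the paper combines this idea with stochastic
# finite elements for uncertain path mass and stiffness.
import math

mu_R, sigma_R = 100.0, 10.0   # capacity (allowable response): mean, std
mu_S, sigma_S = 70.0, 12.0    # transmitted vibration load effect: mean, std

# Reliability index for the linear limit state g = R - S, R and S independent:
beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)

# Failure probability under a normal approximation (standard normal CDF):
def phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

p_f = 1 - phi(beta)
print(beta, p_f)
```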

  12. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned factor influences the assessment, and why and when they should be included in the model....


  14. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    Hendrickson, D.W. [ed.]


    This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided for a design that minimizes maintenance and downtime; it includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  15. Reliability modeling and analysis of smart power systems

    Karki, Rajesh; Verma, Ajit Kumar


    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti

  16. Embedded mechatronic systems 1 analysis of failures, predictive reliability

    El Hami, Abdelkhalak


    In operation, embedded mechatronic systems are stressed by loads of different origins: climatic (temperature, humidity), vibrational, electrical, and electromagnetic. The failure mechanisms that these stresses induce in components should be identified and modeled for better control. AUDACE is a collaborative project of the Mov'eo cluster that addresses issues specific to the reliability of embedded mechatronic systems. AUDACE means analyzing the causes of failure of the components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The projec

  17. Measuring health-related quality of life in children with cancer living in mainland China: feasibility, reliability and validity of the Chinese mandarin version of PedsQL 4.0 Generic Core Scales and 3.0 Cancer Module

    Ji Yi


    Background: The Pediatric Quality of Life Inventory (PedsQL) is a widely used instrument to measure pediatric health-related quality of life (HRQOL) for children aged 2 to 18 years. The purpose of the current study was to investigate the feasibility, reliability and validity of the Chinese Mandarin version of the PedsQL 4.0 Generic Core Scales and 3.0 Cancer Module in a group of Chinese children with cancer. Methods: The PedsQL 4.0 Generic Core Scales and the PedsQL 3.0 Cancer Module were administered to children with cancer (aged 5-18 years) and parents of such children (aged 2-18 years). For comparison, a survey of a demographically group-matched sample of the general population, with children (aged 5-18 years) and parents of children (aged 2-18 years), was conducted with the PedsQL 4.0 Generic Core Scales. Results: The minimal mean percentage of missing item responses (except for the School Functioning scale) supported the feasibility of the PedsQL 4.0 Generic Core Scales and 3.0 Cancer Module for Chinese children with cancer. Most of the scales showed satisfactory reliability, with Cronbach's α exceeding 0.70, and all scales demonstrated sufficient test-retest reliability. Assessing the clinical validity of the questionnaires, statistically significant differences were found between healthy children and children with cancer, and between children on-treatment versus off-treatment ≥12 months. Positive significant correlations were observed between the scores of the PedsQL 4.0 Generic Core Scales and the PedsQL 3.0 Cancer Module. Exploratory factor analysis demonstrated sufficient factorial validity. Moderate to good agreement was found between child self- and parent proxy-reports. Conclusion: The findings support the feasibility, reliability and validity of the Chinese Mandarin version of the PedsQL 4.0 Generic Core Scales and 3.0 Cancer Module in children with cancer living in mainland China.

  18. Error Analysis of High Frequency Core Loss Measurement for Low-Permeability Low-Loss Magnetic Cores

    Niroumand, Farideh Javidi; Nymand, Morten


    Magnetic components significantly contribute to the dissipated loss in power electronic converters. Measuring the true value of the dissipated power in these components is highly desirable, since it can be used to verify their optimum design. The common approach for measuring the loss in magnetic cores is B-H loop measurement, where two windings are placed on the core under test. However, this method is highly vulnerable to phase shift error, especially for low-permeability, low-loss cores, which are favorable due to their soft saturation and very low core loss. ... The analysis has been validated by experimental measurements for relatively low-loss magnetic cores with different permeability values.

  19. Architecture-Based Reliability Analysis of Web Services

    Rahmani, Cobra Mariam


    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…


  1. Reliability analysis of common hazardous waste treatment processes

    Waters, R.D. [Vanderbilt Univ., Nashville, TN (United States)]


    Five hazardous waste treatment processes are analyzed probabilistically using Monte Carlo simulation to elucidate the relationships between process safety factors and reliability levels. The treatment processes evaluated are packed tower aeration, reverse osmosis, activated sludge, upflow anaerobic sludge blanket, and activated carbon adsorption.
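The kind of Monte Carlo experiment described can be sketched generically. The lognormal demand/capacity models, parameters, and the safety-factor sweep below are illustrative assumptions, not the paper's treatment-process models.

```python
# Monte Carlo sketch of the safety-factor/reliability relationship for a
# generic treatment process. Distributions and parameters are invented.
import math
import random

random.seed(1)

def reliability(design_safety_factor, n=100_000):
    """Fraction of trials in which achieved removal meets the requirement."""
    failures = 0
    for _ in range(n):
        # Required removal varies with influent strength (lognormal).
        required = math.exp(random.gauss(0.0, 0.3))
        # Achieved capacity, sized with the given safety factor (lognormal).
        achieved = design_safety_factor * math.exp(random.gauss(0.0, 0.2))
        if achieved < required:
            failures += 1
    return 1 - failures / n

results = {sf: reliability(sf) for sf in (1.0, 1.5, 2.0)}
for sf, r in results.items():
    print(sf, round(r, 3))
```

The sweep makes the qualitative point of such studies explicit: reliability rises steeply but nonlinearly with the design safety factor, and the shape depends on the variability of demand and capacity.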

  2. Fiber Access Networks: Reliability Analysis and Swedish Broadband Market

    Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp

    Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas Swedish operators in particular prefer AON, this may not be the case for operators in other countries. The choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, which should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplication of network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network lifetime.

  3. Statistical Analysis of Human Reliability of Armored Equipment

    LIU Wei-ping; CAO Wei-guo; REN Jing


    Human errors involving seven types of armored equipment, occurring during field tests, are statistically analyzed. The ratio of human errors to armored equipment failures is obtained, the causes of human errors are analyzed, and the distribution law of human errors is acquired. The human error ratio and the human reliability index are also calculated.

  4. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Bruce Weaver


    Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
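The RELIABILITY step of this workflow operates on an item covariance matrix; for Cronbach's alpha the computation is simple enough to show directly. The 3x3 covariance matrix below is invented and merely stands in for the EM-estimated matrix the macros produce.

```python
# Cronbach's alpha computed directly from an item covariance matrix,
# mirroring what SPSS RELIABILITY does with EM-estimated covariances.
# The example matrix is made up for illustration.

def cronbach_alpha(cov):
    k = len(cov)
    total_var = sum(sum(row) for row in cov)      # variance of the sum score
    item_var = sum(cov[i][i] for i in range(k))   # sum of item variances
    return k / (k - 1) * (1 - item_var / total_var)

cov = [
    [1.00, 0.55, 0.45],
    [0.55, 1.00, 0.50],
    [0.45, 0.50, 1.00],
]
print(round(cronbach_alpha(cov), 3))  # 0.75 for this matrix
```

Because alpha depends only on the covariance matrix, any method that produces a sound covariance estimate under missing data (such as EM) feeds directly into the reliability computation.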

  5. Multivariate Regression Analysis of Gravitational Waves from Rotating Core Collapse

    Engels, William J; Ott, Christian D


    We present a new multivariate regression model for analysis and parameter estimation of gravitational waves observed from well but not perfectly modeled sources such as core-collapse supernovae. Our approach is based on a principal component decomposition of simulated waveform catalogs. Instead of reconstructing waveforms by direct linear combination of physically meaningless principal components, we solve via least squares for the relationship that encodes the connection between chosen physical parameters and the principal component basis. Although our approach is linear, the waveforms' parameter dependence may be non-linear. For the case of gravitational waves from rotating core collapse, we show, using statistical hypothesis testing, that our method is capable of identifying the most important physical parameters that govern waveform morphology in the presence of simulated detector noise. We also demonstrate our method's ability to predict waveforms from a principal component basis given a set of physical ...
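The approach can be sketched with synthetic data: decompose a waveform catalog by SVD, then solve least squares for the map from chosen physical parameters to principal-component coefficients. The one-parameter damped-sinusoid "catalog" below is invented for illustration and is not a core-collapse waveform model.

```python
# Principal-component regression sketch: SVD of a waveform catalog, then a
# least-squares map from physical parameters to PC coefficients.
# Synthetic data; parameters and waveform family are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic catalog: 40 waveforms of 256 samples from one parameter p.
t = np.linspace(0, 1, 256)
params = rng.uniform(0.5, 2.0, size=40)
catalog = np.array([np.sin(2 * np.pi * 3 * t) * np.exp(-p * t) for p in params])

# Principal components of the (centered) catalog; rows are waveforms.
U, s, Vt = np.linalg.svd(catalog - catalog.mean(axis=0), full_matrices=False)
n_pc = 5
coeffs = U[:, :n_pc] * s[:n_pc]     # PC coefficients per waveform

# Least-squares map from [1, p, p^2] to PC coefficients: linear in the
# regression sense even though the p-dependence is nonlinear.
X = np.column_stack([np.ones_like(params), params, params**2])
A, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

# Predict the waveform for an unseen parameter value.
p_new = 1.3
x_new = np.array([1.0, p_new, p_new**2])
w_pred = x_new @ A @ Vt[:n_pc] + catalog.mean(axis=0)
```

Solving for the parameter-to-coefficient map, rather than treating the components as free amplitudes, is what makes the decomposition physically interpretable, as the abstract emphasizes.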

  6. Ultra-sensitive Flow Injection Analysis (FIA) determination of calcium in ice cores at ppt level.

    Traversi, R; Becagli, S; Castellano, E; Maggi, V; Morganti, A; Severi, M; Udisti, R


    A Flow Injection Analysis (FIA) spectrofluorimetric method for calcium determination in ice cores was optimised in order to achieve better analytical performances which would make it suitable for reliable calcium measurements at ppt level. The method here optimised is based on the formation of a fluorescent compound between Ca and Quin-2 in buffered environment. A careful evaluation of operative parameters (reagent concentration, buffer composition and concentration, pH), influence of interfering species possibly present in real samples and potential favourable effect of surfactant addition was carried out. The obtained detection limit is around 15 ppt, which is one order of magnitude lower than the most sensitive Flow Analysis method for Ca determination currently available in literature and reproducibility is better than 4% for Ca concentrations of 0.2 ppb. The method was validated through measurements performed in parallel with Ion Chromatography on 200 samples from an alpine ice core (Lys Glacier) revealing an excellent fit between the two chemical series. Calcium stratigraphy in Lys ice core was discussed in terms of seasonal pattern and occurrence of Saharan dust events.

  7. Preventive Replacement Decisions for Dragline Components Using Reliability Analysis

    Nuray Demirel


    Reliability-based maintenance policies allow qualitative and quantitative evaluation of system downtimes by revealing the main causes of breakdowns and discussing the preventive activities required against failures. Application of preventive maintenance is especially important for mining machinery, since production is highly affected by machinery breakdowns. Overburden stripping operations are an integral part of surface coal mine production. Draglines are extensively utilized in overburden stripping and achieve earthmoving with bucket capacities of up to 168 m3. The massive structure and operational severity of these machines increase the importance of performance awareness for individual working components. Research on draglines is rarely observed in the literature, and maintenance studies for these earthmovers have generally been ignored. On this basis, this paper offers a comprehensive reliability assessment for two draglines currently operating in the Tunçbilek coal mine and discusses preventive replacement of the draglines' wear-out components, considering cost factors.

  8. Reliability Analysis and Standardization of Spacecraft Command Generation Processes

    Meshkat, Leila; Grenander, Sven; Evensen, Ken


    • In order to reduce commanding errors caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
    • The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
    • Applicable human reliability metrics for performing these atomic-level tasks are available.
    • The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
    • The PRA models are executed using data from human reliability data banks.
    • The Periodic Table is related to the PRA models via Fault Links.

  9. Analysis on Operation Reliability of Generating Units in 2005

    Zuo Xiaowen; Chu Xue


    The weighted average equivalent availability factor of thermal power units in 2005 was 92.34%, an increase of 0.64 percentage points compared to 2004. The average equivalent availability factor in 2005 was 92.22%, a decrease of 0.95 percentage points compared to 2004. The nationwide operational reliability of generating units in 2005 is analyzed comprehensively in this paper.

  10. Reliability Analysis for Tunnel Supports System by Using Finite Element Method

    E. Bukaçi


    Reliability analysis is a method that can be used in almost any geotechnical engineering problem. Using it requires knowledge of parameter uncertainties, which can be expressed by their standard deviation values. By applying reliability analysis to tunnel support design, a range of safety factors can be obtained, and from them the probability of failure can be calculated. The problem becomes more complex when the analysis is performed with numerical methods such as the Finite Element Method. This paper shows how reliability analysis can be performed for the design of tunnel supports, using the Point Estimate Method to calculate the reliability index. As a case study, one of the energy tunnels at the Fan Hydropower plant in Rrëshen, Albania, is chosen, and values of the factor of safety and the probability of failure are calculated. Some suggestions for using reliability analysis with numerical methods are also given.
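Rosenblueth's point estimate method, as used above, evaluates the limit state at all mean ± one-standard-deviation combinations and forms a reliability index from the resulting moments. The limit-state function and parameter values below are invented for illustration; in the paper each evaluation would be a full finite element tunnel analysis.

```python
# Two-point estimate method for the reliability index of a limit state g(X).
# The limit state (a toy safety margin) and statistics are hypothetical.
import math
from itertools import product

def g(c, phi_deg):
    # Hypothetical safety margin: factor of safety minus 1.
    return (c / 40.0 + math.tan(math.radians(phi_deg)) / 0.6) - 1.0

means = {"c": 50.0, "phi_deg": 25.0}   # cohesion (kPa), friction angle (deg)
stds = {"c": 10.0, "phi_deg": 3.0}

# Evaluate g at all 2^n sign combinations of mean +/- one std deviation.
values = []
for signs in product((-1, 1), repeat=2):
    c = means["c"] + signs[0] * stds["c"]
    phi = means["phi_deg"] + signs[1] * stds["phi_deg"]
    values.append(g(c, phi))

mean_g = sum(values) / len(values)
var_g = sum(v * v for v in values) / len(values) - mean_g**2
beta = mean_g / math.sqrt(var_g)   # reliability index
print(round(beta, 3))
```

The appeal for FEM-based design is that only 2^n model runs are needed, instead of the thousands a Monte Carlo study would require.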

  11. Reliability importance analysis of Markovian systems at steady state using perturbation analysis

    Phuc Do Van [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)]; Barros, Anne [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)]; Berenguer, Christophe [Institut Charles Delaunay - FRE CNRS 2848, Systems Modeling and Dependability Group, Universite de technologie de Troyes, 12, rue Marie Curie, BP 2060-10010 Troyes cedex (France)]


    Sensitivity analysis has primarily been defined for static systems, i.e. systems described by combinatorial reliability models (fault or event trees). Several structural and probabilistic measures have been proposed to assess component importance. For dynamic systems including inter-component and functional dependencies (cold spare, shared load, shared resources, etc.), described by Markov models or, more generally, by discrete event dynamic system models, the problem of sensitivity analysis remains widely open. In this paper, the perturbation method is used to estimate an importance factor, called the multi-directional sensitivity measure, in the framework of Markovian systems. Numerical examples are introduced to show why this method offers a promising tool for steady-state sensitivity analysis of Markov processes in reliability studies.
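The flavor of steady-state sensitivity analysis can be shown on the simplest Markovian reliability model: a two-state up/down component. The finite-difference perturbation below is a stand-in for the paper's perturbation estimator, and the rates are invented.

```python
# Steady-state availability of a two-state Markov model (up/down) and its
# sensitivity to the repair rate, estimated by perturbing the rate.
# Rates are hypothetical; this is a finite-difference stand-in for the
# perturbation analysis discussed in the text.

lam = 0.01   # failure rate (per hour)
mu = 0.5     # repair rate (per hour)

def availability(lam, mu):
    # Stationary probability of the "up" state for the two-state chain.
    return mu / (lam + mu)

# Finite-difference sensitivity of availability w.r.t. the repair rate.
h = 1e-6
sens = (availability(lam, mu + h) - availability(lam, mu)) / h

# Analytical check: dA/dmu = lam / (lam + mu)^2
analytic = lam / (lam + mu) ** 2
print(sens, analytic)
```

For larger models with dependencies, no closed form exists and the stationary distribution must be differentiated numerically or by perturbation techniques, which is the setting the paper addresses.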

  12. Reliability Analysis of Bearing Capacity of Large-Diameter Piles under Osterberg Test

    Lei Nie


    This study presents a reliability analysis of the bearing capacity of large-diameter piles under the Osterberg test. A limit state equation of dimensionless random variables is utilized in the reliability analysis of the vertical bearing capacity of large-diameter piles based on Osterberg loading tests, and the reliability index and the resistance partial coefficient under the current specifications are calculated using a calibration method. The results show that the reliability index of large-diameter piles is correlated with the load effect ratio and is smaller than that of ordinary piles, and that a resistance partial coefficient of 1.53 is appropriate in the design of large-diameter piles.

  13. Analysis of System Reliability in Manufacturing Cell Based on Triangular Fuzzy Number

    ZHANG Caibo; HAN Botang; SUN Changsen; XU Chunjie


    Test data and field data are lacking for reliability research during the design stage of a manufacturing cell system, which increases the difficulty of studying its reliability. In order to deal with the deficient data and the uncertainty arising from analysis and judgment, this paper discusses a method for studying the reliability of a manufacturing cell system through fuzzy fault tree analysis based on triangular fuzzy numbers. Finally, a calculation case indicates that the method has great significance for ascertaining reliability indices, and for maintenance and the establishment of upkeep strategies for manufacturing cell systems.

  14. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    Grabaskas, D.; Brunett, A.; Passerini, S.; Grelle, A.; Bucknor, M.


    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  15. Neutronic analysis of LMFBRs during severe core disruptive accidents

    Tomlinson, E.T.


    A number of numerical experiments were performed to assess the validity of diffusion theory and various perturbation methods for calculating the reactivity state of a severely disrupted liquid metal cooled fast breeder reactor (LMFBR). The disrupted configurations correspond, in general, to phases through which an LMFBR core could pass during a core disruptive accident (CDA). Two reactor models were chosen for this study: the two-zone homogeneous Clinch River Breeder Reactor and the Large Heterogeneous Reactor Design Study Core. The various phases were chosen to approximate the CDA results predicted by the safety analysis code SAS3D. The calculational methods investigated in this study include the eigenvalue difference technique based on both discrete ordinates transport theory and diffusion theory, first-order perturbation theory, exact perturbation theory, and a new hybrid perturbation theory. Selected cases were analyzed using Monte Carlo methods. It was found that in all cases, diffusion theory and perturbation theory yielded results for the change in reactivity that significantly disagreed with both the discrete ordinates and Monte Carlo results. These differences were, in most cases, in a nonconservative direction.

  16. A survey on reliability and safety analysis techniques of robot systems in nuclear power plants

    Eom, H.S.; Kim, J.H.; Lee, J.C.; Choi, Y.R.; Moon, S.S


    Reliability and safety analysis techniques were surveyed to support overall quality improvement of the reactor inspection system under development in our current project. The contents of this report are: 1. Survey of reliability and safety analysis techniques - the reviewed techniques are generally accepted in many industries, including the nuclear industry, and a few suitable for our robot system were selected: fault tree analysis, failure mode and effects analysis, reliability block diagrams, Markov models, the combinational method, and the simulation method. 2. Survey of the characteristics of robot systems that distinguish them from other systems and that are important to the analysis. 3. Survey of the nuclear environmental factors that affect the reliability and safety analysis of robot systems. 4. Collection of case studies of robot reliability and safety analysis performed in other countries. The results of this survey will be applied to improve the reliability and safety of our robot system and will also be used for the formal qualification and certification of our reactor inspection system.

  17. The reliability of ultrasound-guided core needle biopsy in the evaluation of non-palpable solid breast lesions using 18-gauge needles

    Lim, Sung Chul; Kim, Young Sook [Chosun University College of Medicine, Gwangju (Korea, Republic of); Sneige, Nour [The University of Texas M.D. Anderson Cancer Center, Houston (United States)


    Ultrasound-guided core needle biopsy (US CNB) is increasingly used in the histologic evaluation of non-palpable solid breast lesions. We retrospectively investigated the diagnostic accuracy of this technique, using an 18-gauge needle, in 422 non-palpable breast lesions. Between January 1994 and December 1999, 583 female patients with an average age of 56 (range, 22-90) years underwent 590 US CNBs with 18-gauge needles; an average of four cores per lesion was obtained. Three hundred and eighty-five lesions were subsequently surgically excised; for 14 of these, the pathologic diagnosis was breast carcinoma metastasis, while 23 with benign diagnoses were clinically followed up for ≥2.5 years and were considered for analysis. Of the 422 lesions, 340 (80.6%) were malignant [308 invasive, 24 ductal carcinoma in situ (DCIS), 7 DCIS with undetermined invasion and 1 DCIS vs. lobular carcinoma in situ], 67 (15.9%) were benign [30 fibroadenoma (FA) and 37 other diagnoses], and five (1.2%) were fibroepithelial lesions. The remaining ten samples (2.4%) included six cases of atypical ductal hyperplasia (ADH), two of atypical hyperplasia (AH), and two of lobular neoplasia. The sensitivity, specificity, positive predictive value, and negative predictive value of CNB were 99%, 100%, 100%, and 96%, respectively. Two cases of invasive carcinoma were missed at CNB; there was no false-positive diagnosis. Five of six ADHs and one of two AHs were found to be carcinomas (3 DCIS and 3 infiltrating duct carcinomas). Sixteen of 24 (66.7%) cases of DCIS were found at excision to be invasive carcinomas. Of 31 FAs, two (6.5%) were found to be low-grade phyllodes tumor (PT). The five fibroepithelial lesions were shown at excision to be either PT (n=4) or FA (n=1). US CNB using an 18-gauge needle is a safe and reliable means of diagnosing breast carcinoma. Because of the high prevalence of ductal carcinoma in these lesions, findings of ADH/AH at US CNB indicate that surgical excision is needed.

  18. Acquisition and statistical analysis of reliability data for I and C parts in plant protection system

    Lim, T. J.; Byun, S. S.; Han, S. H.; Lee, H. J.; Lim, J. S.; Oh, S. J.; Park, K. Y.; Song, H. S. [Soongsil Univ., Seoul (Korea)


    This project has been performed in order to construct I and C part reliability databases for detailed analysis of the plant protection system and to develop a methodology for analysing trip set point drifts. A reliability database for the I and C parts of the plant protection system is required to perform the detailed analysis. First, we developed an electronic part reliability prediction code based on MIL-HDBK-217F. We then collected generic reliability data for the I and C parts in the plant protection system, developed a statistical analysis procedure to process the data, and constructed the generic reliability database. We also collected plant-specific reliability data for the I and C parts in the plant protection system for the YGN 3,4 and UCN 3,4 units, and a plant-specific reliability database for I and C parts was developed by the Bayesian procedure. We also developed a statistical analysis procedure for set point drift and performed an analysis of drift effects for trip set points. The reliability database for the PPS I and C parts provides the basis for the detailed analysis. The safety of the KSNP and succeeding NPPs can be proved by reducing the uncertainty of PSA, and economic and efficient operation of NPPs is possible by optimizing the test period to reduce the utility's burden. 14 refs., 215 figs., 137 tabs. (Author)
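The Bayesian procedure mentioned above, which combines generic reliability data with plant-specific evidence, can be sketched with a standard gamma-Poisson conjugate update; the prior parameters and evidence below are illustrative, not values from the report:

```python
# Gamma-Poisson conjugate update: generic data gives a Gamma(alpha, beta)
# prior on a constant failure rate; plant-specific evidence (n failures
# in t operating hours) yields a Gamma(alpha + n, beta + t) posterior.
def bayesian_update(alpha, beta, failures, hours):
    """Return posterior (alpha, beta) and the posterior mean failure rate."""
    a_post = alpha + failures
    b_post = beta + hours
    return a_post, b_post, a_post / b_post

# Illustrative generic prior: mean rate 2e-6/h (alpha=2, beta=1e6 h),
# updated with 1 observed failure in 5e5 plant-specific hours.
a, b, rate = bayesian_update(2.0, 1.0e6, failures=1, hours=5.0e5)
```

With a conjugate prior the update is closed-form, which is why this family is the usual choice for component failure-rate databases.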

  19. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    Taheriyoun, Masoud; Moradinejad, Saber


    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistake, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been used in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
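The minimal cut set quantification and Monte Carlo check described above can be sketched as follows; the basic events and probabilities are illustrative stand-ins, not values from the Tehran West Town study:

```python
import itertools
import random

def top_event_prob_ie(cut_sets, probs):
    """Exact top-event probability by inclusion-exclusion over minimal
    cut sets, assuming independent basic events."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in itertools.combinations(cut_sets, k):
            events = set().union(*combo)
            term = 1.0
            for e in events:
                term *= probs[e]
            total += (-1) ** (k + 1) * term
    return total

def top_event_prob_mc(cut_sets, probs, trials=200_000, seed=1):
    """Monte Carlo estimate: the top event occurs if every basic event
    in at least one minimal cut set occurs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = {e: rng.random() < p for e, p in probs.items()}
        if any(all(state[e] for e in cs) for cs in cut_sets):
            hits += 1
    return hits / trials

# Hypothetical basic events loosely echoing those named in the abstract.
probs = {"operator_error": 0.02, "physical_damage": 0.005, "design_flaw": 0.001}
cut_sets = [{"operator_error"}, {"physical_damage", "design_flaw"}]
exact = top_event_prob_ie(cut_sets, probs)
mc = top_event_prob_mc(cut_sets, probs)
```

Note how the single-event cut set (human error) dominates the result, mirroring the study's finding that human factors drive top-event occurrence.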

  20. Non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory


    Randomness and fuzziness are among the attributes of the influential factors for stability assessment of pile foundations. According to these two characteristics, the triangular fuzzy number analysis approach was introduced to determine the probability distribution function of the mechanical parameters. The performance function for reliability analysis was then constructed based on a study of the bearing mechanism of the pile foundation, and a way to calculate interval values of the performance function was developed using an improved interval-truncation approach and the operation rules of interval numbers. Afterwards, the non-probabilistic fuzzy reliability analysis method was applied to assess the pile foundation, yielding a method for non-probabilistic fuzzy reliability analysis of pile foundation stability by interval theory. Finally, the probability distribution curve of the non-probabilistic fuzzy reliability indexes of a practical pile foundation was obtained. Its failure possibility is 0.91%, which shows that the pile foundation is stable and reliable.
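The interval-number operation rules and a non-probabilistic reliability index can be sketched as follows; the limit state g = R - S and the numbers are illustrative, and the index convention eta = midpoint/radius is one common formulation, not necessarily the paper's exact one:

```python
# Elementary interval arithmetic: intervals are (lower, upper) tuples.
def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def interval_reliability_index(R, S):
    """Non-probabilistic reliability index eta = midpoint(g) / radius(g)
    for the limit state g = R - S; eta > 1 means g cannot become negative
    for any values inside the intervals."""
    g = i_sub(R, S)
    mid = (g[0] + g[1]) / 2.0
    rad = (g[1] - g[0]) / 2.0
    return mid / rad

# Hypothetical resistance and load intervals (kN).
eta = interval_reliability_index((900.0, 1100.0), (500.0, 700.0))
```

Here g = (200, 600), so eta = 400/200 = 2, i.e. the worst-case combination still leaves a positive margin.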

  1. Structural Reliability Analysis for Implicit Performance with Legendre Orthogonal Neural Network Method

    Lirong Sha; Tongyu Wang


    In order to evaluate the failure probability of a complicated structure, the structural responses usually need to be estimated by numerical analysis methods such as the finite element method (FEM). The response surface method (RSM) can be used to reduce the computational effort required for reliability analysis when the performance functions are implicit. However, the conventional RSM is time-consuming or cumbersome if the number of random variables is large. This paper proposes a Legendre orthogonal neural network (LONN)-based RSM to estimate the structural reliability. In this method, the relationship between the random variables and structural responses is established by a LONN model. The LONN model is then connected to a reliability analysis method, i.e., the first-order reliability method (FORM), to calculate the failure probability of the structure. Numerical examples show that the proposed approach is applicable to structural reliability analysis, including structures with implicit performance functions.
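A polynomial response surface in a Legendre basis can stand in for the LONN surrogate to illustrate the idea; here the "implicit" response is a hypothetical closed-form function, and the failure probability is estimated by Monte Carlo on the surrogate rather than FORM for brevity:

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)

# Hypothetical implicit performance function (stands in for an FEM response).
def g(x):
    return 3.0 - x - 0.1 * x**2

# Build the surrogate from a few "expensive" evaluations on [-5, 5],
# mapping the input to [-1, 1] where the Legendre basis is orthogonal.
x_train = np.linspace(-5.0, 5.0, 11)
coef = leg.legfit(x_train / 5.0, g(x_train), deg=2)

# Failure probability P(g < 0) for x ~ N(0, 1), estimated on the cheap
# surrogate and, for comparison, on the true function.
x = rng.standard_normal(100_000)
pf_surrogate = float(np.mean(leg.legval(x / 5.0, coef) < 0))
pf_direct = float(np.mean(g(x) < 0))
```

Because the test function is itself quadratic, the degree-2 surrogate reproduces it essentially exactly; for a real FEM response the surrogate error would add to the Monte Carlo error.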

  2. Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation

    Yanling Ni


    This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as the limit state function for corrosion-affected cast iron pipes. The influence of failure mode on the probability of pipe failure is then discussed. Sensitivity analysis is also carried out to show the effect of changing basic parameters on the reliability and lifetime of the pipe. The analysis results show that the applied methodology can consider different random variables for estimating the lifetime of the pipe and can also provide scientific guidance for rehabilitation and maintenance plans for agricultural food irrigation. In addition, the results of the failure and reliability analysis in this study can be useful for the design of more reliable new pipeline systems for agricultural food irrigation.
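A minimal Monte Carlo sketch of a fracture-toughness limit state for a corrosion-affected pipe might look like the following; the power-law pit growth and all distribution parameters are invented for illustration, not taken from the paper:

```python
import math
import random

def pipe_failure_prob(years, trials=100_000, seed=42):
    """Monte Carlo estimate of P(fracture) for a corrosion-affected cast
    iron pipe. The limit state compares the applied stress intensity at a
    growing corrosion pit with the fracture toughness; all numbers are
    illustrative."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        k_ic = rng.lognormvariate(math.log(10.0), 0.15)  # toughness, MPa*sqrt(m)
        stress = rng.normalvariate(40.0, 8.0)            # hoop stress, MPa
        depth_m = 2.0e-3 * years ** 0.58                 # pit depth (power law), m
        k_applied = stress * math.sqrt(math.pi * depth_m)
        if k_applied > k_ic:
            fails += 1
    return fails / trials

p10 = pipe_failure_prob(10.0)
p50 = pipe_failure_prob(50.0)
```

The failure probability grows with service time as the pit deepens, which is the kind of trend the sensitivity analysis in the study explores parameter by parameter.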

  3. Computation system for nuclear reactor core analysis. [LMFBR

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.


    This report documents a system of computer codes developed as modules to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code in up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, along with aspects common to the computational modules; the modules themselves are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.

  4. Eigenvalue analysis using a full-core Monte Carlo method

    Okafor, K.C.; Zino, J.F. (Westinghouse Savannah River Co., Aiken, SC (United States))


    The reactor physics codes used at the Savannah River Site (SRS) to predict reactor behavior have been continually benchmarked against experimental and operational data. A particular benchmark variable is the observed initial critical control rod position. Historically, there has been some difficulty predicting this position because of the difficulties inherent in using computer codes to model experimental or operational data. The Monte Carlo method is applied in this paper to study the initial critical control rod positions for the SRS K Reactor. A three-dimensional, full-core MCNP model of the reactor was developed for this analysis.

  5. Reliability of the ATD Angle in Dermatoglyphic Analysis.

    Brunson, Emily K; Hohnan, Darryl J; Giovas, Christina M


    The "ATD" angle is a dermatoglyphic trait formed by drawing lines between the triradii below the first and last digits and the most proximal triradius on the hypothenar region of the palm. This trait has been widely used in dermatoglyphic studies, but several researchers have questioned its utility, specifically whether or not it can be measured reliably. The purpose of this research was to examine the measurement reliability of this trait. Finger and palm prints were taken using the carbon paper and tape method from the right and left hands of 100 individuals. Each "ATD" angle was read twice, at different times, by Reader A, using a goniometer and a magnifying glass, and three times by Reader B, using Adobe Photoshop. Intraclass correlation coefficients were estimated for the intra- and inter-reader measurements of the "ATD" angles. Reader A was able to quantify ATD angles on 149 out of 200 prints (74.5%), and Reader B on 179 out of 200 prints (89.5%). Both readers agreed on whether an angle existed on a print 89.8% of the time for the right hand and 78.0% for the left. Intra-reader correlations were 0.97 or greater for both readers. Inter-reader correlations for "ATD" angles measured by both readers ranged from 0.92 to 0.96. These results suggest that the "ATD" angle can be measured reliably, and further imply that measurement using a software program may provide an advantage over other methods.
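Once the three triradii are located, measuring the angle itself reduces to plane geometry; a sketch with hypothetical print coordinates (this mirrors what a software measurement does internally, not any specific Photoshop workflow):

```python
import math

def atd_angle(a, t, d):
    """Angle in degrees at the axial triradius t between the rays t->a
    and t->d, where a and d are the digital triradii below the first and
    last digits; each point is an (x, y) print coordinate."""
    v1 = (a[0] - t[0], a[1] - t[1])
    v2 = (d[0] - t[0], d[1] - t[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical coordinates in arbitrary print units.
angle = atd_angle(a=(-3.0, 4.0), t=(0.0, 0.0), d=(3.0, 4.0))
```

Reading coordinates off a digitized print and computing the angle in code removes the goniometer-placement variability that the study's inter-reader comparison quantifies.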

  6. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


    a significant role in this assessment and different models have been created for it, but a representation which includes all of them has not been developed yet. This paper deals with this issue. First, a list of nine influencing Factors is presented and discussed. Secondly, these Factors are included...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each mentioned Factor influences the assessment, and why and when they should be included in the model....

  8. Reliability Analysis of Timber Structures through NDT Data Upgrading

    Sousa, Hélder; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    for reliability calculation. In chapter 4, updating methods are conceptualized and defined. Special attention is drawn upon Bayesian methods and its implementation. Also a topic for updating based in inspection of deterioration is provided. State of the art definitions and proposed measurement indices......The first part of this document presents, in chapter 2, a description of timber characteristics and common used NDT and MDT for timber elements. Stochastic models for timber properties and damage accumulation models are also referred. According to timber’s properties a framework is proposed...

  9. A disjoint algorithm for seismic reliability analysis of lifeline networks


    The algorithm is based on constructing a disjoint set of the minimal paths in a network system. In this paper, cubic notation was used to describe the logic function of a network in a well-balanced state, and the sharp-product operation was then used to construct the disjoint minimal path set of the network. A computer program has been developed, and when combined with decomposition technology, the reliability of a general lifeline network can be effectively and automatically calculated.
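For a small network the same quantity can be checked by inclusion-exclusion over the minimal path sets; the disjoint/sharp-product construction in the paper reaches the same result with far fewer terms on large networks. The bridge network below is a textbook example, not one from the paper:

```python
import itertools

def network_reliability(min_paths, p):
    """Two-terminal reliability by inclusion-exclusion over minimal path
    sets: the system works if every edge of at least one minimal path
    works; edges fail independently with survival probabilities p."""
    total = 0.0
    for k in range(1, len(min_paths) + 1):
        for combo in itertools.combinations(min_paths, k):
            edges = set().union(*combo)
            term = 1.0
            for e in edges:
                term *= p[e]
            total += (-1) ** (k + 1) * term
    return total

# Classic bridge network: s-1-t via edges a,b; s-2-t via c,d; bridge e
# between nodes 1 and 2. Minimal s-t paths:
paths = [{"a", "b"}, {"c", "d"}, {"a", "e", "d"}, {"c", "e", "b"}]
p = dict.fromkeys("abcde", 0.9)
rel = network_reliability(paths, p)
```

With all edge reliabilities at 0.9 this evaluates to the well-known bridge value 0.97848; inclusion-exclusion generates 2^m - 1 terms for m paths, which is exactly the blowup the disjoint-products approach avoids.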

  10. Reliability and maintenance analysis of the CERN PS booster

    Staff, P S B


    Since the PS Booster Synchrotron is a complex accelerator with four superposed rings and substantial additional equipment for beam splitting and recombination, doubts were expressed at the time of project authorization about its likely operational reliability. For 1975 and 1976, the average down time was 3.2% (at least one ring off) or 1.5% (all four rings off). The items analysed are: operational record, design features, maintenance, spare parts policy, operating temperature, effects of thunderstorms, fault diagnostics, role of operations staff and action by experts. (15 refs).

  11. Reliability analysis of the bulk cargo loading system including dependent components

    Blokus-Roszkowska, Agnieszka


    In the paper an innovative approach to the reliability analysis of multistate series-parallel systems assuming their components' dependency is presented. The reliability function of a multistate series system with components dependent according to the local load sharing rule is determined. Linking these results for series systems with results for parallel systems with independent components, we obtain the reliability function of a multistate series-parallel system assuming dependence of components' departures from the reliability state subsets within each series subsystem and independence between these subsystems. As a particular case, the reliability function of a multistate series-parallel system composed of dependent components having exponential reliability functions is fixed. The theoretical results are applied to the reliability evaluation of a bulk cargo transportation system, whose main task is to load bulk cargo on board ships. The reliability function and other reliability characteristics of the loading system are determined for the case where its components have exponential reliability functions with interdependent departure rates from the subsets of their reliability states. Finally, the obtained results are compared with results for the bulk cargo transportation system composed of independent components.
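For the independent-component baseline used in the final comparison, the reliability function of a series-parallel system of exponential components reduces to products of exponentials; a two-state sketch with invented rates (the paper's multistate, load-sharing model generalizes this):

```python
import math

def series_rel(t, rates):
    """Series of independent exponential components:
    R(t) = exp(-t * sum(rates))."""
    return math.exp(-t * sum(rates))

def series_parallel_rel(t, subsystems):
    """Parallel arrangement of series subsystems (independent components):
    the system fails only when every series subsystem has failed."""
    prod = 1.0
    for rates in subsystems:
        prod *= 1.0 - series_rel(t, rates)
    return 1.0 - prod

# Two redundant loading lines, each a series of two components (rates in 1/h).
r = series_parallel_rel(100.0, [[1e-3, 2e-3], [1.5e-3, 1.5e-3]])
```

Under the local load sharing rule of the paper, the departure rates inside a subsystem would increase as neighbouring components fail, lowering this independent-case value.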

  12. Using a Hybrid Cost-FMEA Analysis for Wind Turbine Reliability Analysis

    Nacef Tazi


    Failure mode and effects analysis (FMEA) has been proven to be an effective methodology to improve system design reliability. However, the standard approach reveals some weaknesses when applied to wind turbine systems. The conventional criticality assessment method has been criticized as having many limitations, such as the weighting of severity and detection factors. In this paper, we aim to overcome these drawbacks and develop a hybrid cost-FMEA by integrating cost factors into the criticality assessment; these costs vary from replacement costs to expected failure costs. A quantitative comparative study is then carried out to point out average failure rate, main cause of failure, expected failure costs and failure detection techniques. A dedicated reliability analysis of the gearbox and rotor blades is presented.
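The core idea of replacing severity/detection weightings with cost factors can be sketched as an expected-annual-cost ranking; the failure rates and costs below are illustrative, not the paper's data:

```python
def cost_criticality(components):
    """Rank failure modes by expected annual cost:
    failure_rate [1/yr] * (replacement cost + downtime cost per failure).
    Returns (expected_cost, name) pairs, most critical first."""
    ranked = []
    for name, (rate, repl_cost, down_cost) in components.items():
        ranked.append((rate * (repl_cost + down_cost), name))
    return sorted(ranked, reverse=True)

# Illustrative wind-turbine numbers (failures/yr, replacement $, downtime $).
data = {
    "gearbox":      (0.10, 230_000, 120_000),
    "rotor_blades": (0.12,  90_000,  60_000),
    "generator":    (0.11,  60_000,  35_000),
}
ranking = cost_criticality(data)
```

A monetary criticality sidesteps the ordinal severity/detection scales of conventional RPN scoring, which is the limitation the hybrid cost-FMEA targets.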

  13. Investigation for Ensuring the Reliability of the MELCOR Analysis Results

    Sung, Joonyoung; Maeng, Yunhwan; Lee, Jaeyoung [Handong Global Univ., Pohang (Korea, Republic of)


    Flow rate can also be a main factor to examine, because it governs the thermal balance through heat transfer inside the fuel assembly. Questions about the reliability of MELCOR results were raised in the 2nd technical report of the NSRC project. To confirm whether the MELCOR results are dependable, experimental data from phase 1 of the Sandia Fuel Project were used as a reference for comparison. In spent fuel pool (SFP) severe accidents, especially boil-off, partial loss of coolant, and complete loss of coolant accidents, heat source and flow rate are the main points for analyzing the MELCOR results. The heat source is composed of decay heat and oxidation heat. Because heat accumulating in the spent fuel rods can raise the cladding temperature continuously until oxidation heat is generated, leading to a zirconium fire, the heat source is a main factor to be confirmed. This work investigated the reliability of MELCOR results in order to confirm the physical phenomena occurring in an SFP severe accident. Most results showed that the MELCOR output differed significantly with minute changes of the main parameters under identical conditions. It is therefore necessary to choose oxidation coefficients that delineate the real phenomena as closely as possible.

  14. Reliability analysis on a shell and tube heat exchanger

    Lingeswara, S.; Omar, R.; Mohd Ghazi, T. I.


    A reliability study of a shell and tube heat exchanger was performed using historical data from a carbon black manufacturing plant. Heat exchanger reliability studies are vital in all related industries, as inappropriate maintenance and operation of the heat exchanger will lead to major Process Safety Events (PSE) and loss of production. The overall heat transfer coefficient/effectiveness (Uo) and Mean Time Between Failures (MTBF) were analyzed and calculated. The Aspen and downtime data were taken from a typical carbon black shell and tube heat exchanger manufacturing plant. The calculated Uo was observed to decline over time, caused by severe fouling and heat exchanger limitations. This limitation also requires a further burn-out period, which leads to loss of production. The calculated MTBF is 649.35 hours, which is very low compared with the standard 6000 hours for good operation of a shell and tube heat exchanger. Guidelines on heat exchanger repair and preventive and predictive maintenance were identified and highlighted for better heat exchanger inspection and repair in the future. Fouling of the heat exchanger and the associated production loss will continue if proper heat exchanger operation and repair using standard operating procedures are not followed.
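The two quantities tracked in the study are simple to compute from plant records; the failure count below is back-calculated so that the sketch reproduces the quoted 649.35 h MTBF and is otherwise an assumption, as are the heat-duty numbers:

```python
def mtbf(total_operating_hours, n_failures):
    """Mean time between failures over an observation window."""
    return total_operating_hours / n_failures

def overall_u(q, area, lmtd):
    """Overall heat transfer coefficient from duty Q [W], area A [m^2]
    and log-mean temperature difference [K]: U = Q / (A * LMTD).
    A falling trend in U over time indicates fouling."""
    return q / (area * lmtd)

# Assumed: 12 failures over ~7792 operating hours reproduces the paper's MTBF.
m = mtbf(total_operating_hours=7792.2, n_failures=12)
u = overall_u(q=1.0e6, area=50.0, lmtd=40.0)  # illustrative duty -> 500 W/m^2K
```

Trending u across operating campaigns is what reveals the fouling-driven decline the study reports.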

  15. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    TONG Lili; CAO Xuewu


    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article, and fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application on the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.
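As a simple baseline against which a fault-tree-based allocation can be compared, the classical ARINC apportionment (allocation proportional to predicted subsystem failure rates in a series system) can be sketched as follows; this is a standard textbook method, not the paper's dualistic-contrast model, and the rates are illustrative:

```python
def arinc_allocation(target_system_rate, predicted_rates):
    """ARINC apportionment: each subsystem's allocated failure rate is
    proportional to its predicted (current) rate, so that the allocations
    sum to the system target (series system, constant failure rates)."""
    total = sum(predicted_rates.values())
    return {name: target_system_rate * rate / total
            for name, rate in predicted_rates.items()}

# Illustrative subsystems of an emergency diesel generator train (rates in 1/h).
alloc = arinc_allocation(
    1e-4,
    {"diesel": 6e-4, "breaker": 3e-4, "control": 1e-4},
)
```

Proportional apportionment keeps the relative burden on each subsystem unchanged; optimization-based methods like the paper's instead shift the burden toward the subsystems that are cheapest to improve.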

  16. Midwife Core Competency Scale: Reliability and validity assessment

    Wang Dehui; Lu Hong; Sun Hong


    Objective: To develop a scale to assess midwives' core competency and to test the reliability and validity of the Midwife Core Competency Scale. Methods: We developed the Midwife Core Competency Scale through a literature review centered on the essential competencies for midwifery practice formulated by the International Confederation of Midwives (ICM), consultation with midwifery experts, a survey of 300 midwives from nineteen hospitals in Beijing, and assessment of the reliability and validity of the questionnaire. Results: 295 valid scales were returned. The scale comprised 6 dimensions and 54 items. The internal consistency Cronbach's α coefficient was 0.978 for the total scale and between 0.921 and 0.938 for the individual dimensions, all above 0.9; the content validity index was 0.95; and the construct validity yielded six factors with a cumulative explained variance of 70.927%, all within the psychometrically acceptable range. Conclusions: The Midwife Core Competency Scale can be considered a reliable and valid scale, and its items are suitable for evaluating the core competency of midwives in China.
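The internal-consistency figure reported above is Cronbach's α; a minimal stdlib implementation, with made-up item scores for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: list of per-item score lists, one entry per respondent, all the
    same length. alpha = k/(k-1) * (1 - sum(item variances) / var(totals)),
    using sample (n-1) variances."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items scored by four respondents on a 1-5 scale.
alpha = cronbach_alpha([[3, 4, 5, 4], [2, 4, 5, 3], [3, 5, 5, 4]])
```

Values above roughly 0.9, like the scale's 0.978, indicate that the items measure the construct very consistently (and sometimes that items are redundant).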

  17. Analysis of three-phase power transformer laminated magnetic core designs

    M.I. Levin


    Analysis and research into the properties and parameters of different types of laminated magnetic cores of three-phase power transformers are conducted. Most of the new laminated magnetic core designs are found to have significant shortcomings resulting from the design and technological features of their manufacturing. These shortcomings increase ohmic loss in the magnetic core, which eliminates the advantages of the new core configurations and makes them uncompetitive compared with the classical laminated magnetic core design.

  18. Reliability analysis of gravity dams by the response surface method

    Humar, Nina; Kryžanowski, Andrej; Brilly, Mitja; Schnabl, Simon


    A dam failure is one of the most important problems in the dam industry. Since the mechanical behavior of dams is usually a complex phenomenon, existing classical mathematical models are generally insufficient to adequately predict dam failure and thus the safety of dams. Therefore, numerical reliability methods are often used to model such complex mechanical phenomena. The main purpose of the present paper is to present the response surface method as a powerful mathematical tool used to study and foresee dam safety based on a set of collected monitoring data. The derived mathematical model is applied to a case study, the Moste dam, which is the highest concrete gravity dam in Slovenia. Based on the derived model, the ambient/state variables are correlated with the dam deformation in order to gain a forecasting tool able to define the critical thresholds for dam management.

  19. Reliability of three-dimensional gait analysis in cervical spondylotic myelopathy.

    McDermott, Ailish


    Gait impairment is one of the primary symptoms of cervical spondylotic myelopathy (CSM). Detailed assessment is possible using three-dimensional gait analysis (3DGA); however, the reliability of 3DGA for this population has not been established. The aim of this study was to evaluate the test-retest reliability of temporal-spatial, kinematic and kinetic parameters in a CSM population.


    R.K. Agnihotri


    The present paper deals with the reliability analysis of a boiler system used in the garment industry. The system consists of a single boiler unit, which plays an important role in the garment industry. Using the regenerative point technique with a Markov renewal process, various reliability characteristics of interest are obtained.

  1. Convergence among Data Sources, Response Bias, and Reliability and Validity of a Structured Job Analysis Questionnaire.

    Smith, Jack E.; Hakel, Milton D.


    Examined are questions pertinent to the use of the Position Analysis Questionnaire: Who can use the PAQ reliably and validly? Must one rely on trained job analysts? Can people having no direct contact with the job use the PAQ reliably and validly? Do response biases influence PAQ responses? (Author/KC)

  2. Transient Analysis of Air-Core Coils by Moment Method

    Fujita, Akira; Kato, Shohei; Hirai, Takao; Okabe, Shigemitu

    In electric power systems, the threat of lightning surges is reduced by using ground wires and arresters, but the risk of transformer failure remains high. The winding is the most familiar conductor configuration in electromagnetic components such as transformers, resistors, and reactance devices. It is therefore important to investigate how a lightning surge propagates into a winding, but the electromagnetic coupling within a winding makes lightning surge analysis difficult. In this paper we present a transient characteristics analysis of air-core coils by the moment method in the frequency domain. We calculate the inductance from the time response and the low-frequency impedance, and compare them with the analytical equation based on the Nagaoka factor.
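For a single-layer air-core solenoid, the analytical inductance against which a moment-method result is compared takes the current-sheet form with the Nagaoka coefficient; the sketch below uses a common closed-form approximation to that coefficient, not the authors' exact expression, and the coil dimensions are illustrative:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def air_core_inductance(n_turns, radius, length):
    """Single-layer air-core solenoid inductance via the current-sheet
    formula L = mu0 * N^2 * pi * r^2 / l, corrected by the Nagaoka
    coefficient, here approximated as k_N ~ 1 / (1 + 0.9 * r / l)
    (good to roughly 1% for coils longer than their radius)."""
    k_n = 1.0 / (1.0 + 0.9 * radius / length)
    return MU0 * n_turns**2 * math.pi * radius**2 / length * k_n

# 100 turns, 1 cm radius, 10 cm long -> a few tens of microhenries.
L = air_core_inductance(n_turns=100, radius=0.01, length=0.1)
```

The uncorrected infinite-solenoid formula overestimates the inductance of a finite coil; the Nagaoka factor is exactly this end-effect correction.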

  3. Developing engineering design core competences through analysis of industrial products

    Hansen, Claus Thorp; Lenau, Torben Anker


    Most product development work carried out in industrial practice is characterised by being incremental, i.e. the industrial company has had a product in production and on the market for some time, and now time has come to design a new and upgraded variant. This type of redesign project requires...... that the engineering designers have core design competences to carry through an analysis of the existing product encompassing both a user-oriented side and a technical side, as well as to synthesise solution proposals for the new and upgraded product. The authors of this paper see an educational challenge in staging...... a course module, in which students develop knowledge, understanding and skills, which will prepare them for being able to participate in and contribute to redesign projects in industrial practice. In the course module Product Analysis and Redesign that has run for 8 years we have developed and refined...

  4. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian


    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  5. Reliability analysis of repairable systems using system dynamics modeling and simulation

    Srinivasa Rao, M.; Naikan, V. N. A.


    The study and analysis of repairable standby systems is an important topic in reliability. Analytical techniques become very complicated and unrealistic, especially for modern complex systems, and there have been attempts in the literature to evolve more realistic techniques using a simulation approach for reliability analysis of systems. This paper proposes a hybrid approach called the Markov system dynamics (MSD) approach, which combines the Markov approach with system dynamics simulation for reliability analysis and for studying the dynamic behavior of systems. This approach has the advantages of both the Markov and system dynamics methodologies. The proposed framework is illustrated for a standby system with repair. The simulation results, when compared with those obtained by traditional Markov analysis, clearly validate the MSD approach as an alternative approach for reliability analysis.
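The Markov side of the MSD approach can be illustrated with the steady-state availability of a two-unit cold-standby system with one repair crew; this is a standard birth-death model with invented rates, not the paper's specific example:

```python
def standby_availability(lam, mu):
    """Steady-state availability of a two-unit cold-standby system with
    one repair crew. States 0, 1, 2 count failed units; only the active
    unit fails (rate lam) and one unit at a time is repaired (rate mu),
    giving a birth-death chain with p1 = rho*p0, p2 = rho^2*p0 for
    rho = lam/mu. The system is down only in state 2."""
    rho = lam / mu
    p0 = 1.0 / (1.0 + rho + rho**2)
    p2 = rho**2 * p0
    return 1.0 - p2

# Illustrative rates: one failure per 100 h, repairs averaging 2 h.
a = standby_availability(lam=0.01, mu=0.5)
```

The closed-form chain gives the benchmark value against which a system-dynamics simulation of the same system can be validated, which is essentially the comparison the paper performs.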


    Yao Chengyu; Zhao Jingyi


    To overcome the design limitations of the traditional hydraulic control system for synthetic rubber presses, and such faults as a high fault rate, low reliability and high energy consumption, which often led to shutdowns of the post-treatment product line for synthetic rubber, a brand-new hydraulic system combining PC control with two-way cartridge valves was developed for the press, and its reliability was analysed. A reliability model of the hydraulic system for the press was established by analysing the processing steps, and reliability simulation of each step and of the whole system was carried out with MATLAB, which was verified through a reliability test. The fixed-time test proved not only that the theoretical analysis is sound, but also that the system is reasonably designed and highly reliable, and can lower the required power supply and operational energy cost.

  7. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Yixiong Feng


    The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. Intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) the related expert evaluation information was prepared in interval numbers; (2) an improved product failure analysis considering the uncertain carbon emissions of the subsystems was performed to obtain subsystem weights that take the carbon emissions into consideration, and interval grey correlation analysis was conducted to obtain subsystem weights that take the uncertain correlations inside the product into consideration; using these two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes becomes available; (3) interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability design scheme based on three attributes, namely low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe demonstrates the superiority and rationality of the proposed method.

  8. Reliability, Validation and Construct Confirmation of the Core Self-Evaluations Scale

    杜建政; 张翔; 赵燕


    were eliminated and 526 usable surveys were obtained. We measured core self-evaluations with twelve items from Judge's (1997) Core Self-Evaluations Scale. The English version was first translated into Chinese, and the Chinese version was then back-translated into English; both translations revealed few problems with the meaning of the scale. In addition, we revised the Chinese version on the basis of interviews with ten students and evaluations from two psychology professors. We also measured life satisfaction with the five-item Satisfaction With Life Scale (Diener, Emmons, et al., 1985). This study conducted all statistical analyses using SPSS 11.5 to test the reliability and validity of the two scales, and Amos 4.0 to test the hypothesized structure and alternative structures of the core self-evaluations concept. The internal consistency reliability of the scale was 0.83, the split-half reliability was 0.84, and the test-retest reliability (three weeks) was 0.82. The correlations between item scores and the scale score showed that the scale has good item discrimination. Correlation analysis showed a strong positive relation between core self-evaluations and life satisfaction, with a correlation coefficient of 0.476 (p<0.001). The research used the cross-validation method. Exploratory factor analysis on one half of the data demonstrated that a one-factor solution at the item level is rational, and the factor analysis result improved when item 3 and item 9 were deleted. Confirmatory factor analysis on the other half of the data suggested that the fit statistics represented a good fit of the hypothesized model to the data, and that the hypothesized model was better than the alternative models. This study was the first to revise and test the Core Self-Evaluations Scale in China. The results were consistent with other studies conducted in Western countries except for a few items. From the standpoint of psychology research
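    The internal-consistency figure quoted in this record (alpha = 0.83) is a standard Cronbach's alpha computation. A minimal sketch, using illustrative toy item scores rather than the study's data:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: one list per item, each holding that item's score
    for every respondent (all items answered by the same respondents).
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]   # per-respondent sums
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Perfectly correlated items give alpha = 1; items that partly disagree pull alpha down, which is why a value around 0.83 for a twelve-item scale is read as good internal consistency.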

  9. Reactor scram experience for shutdown system reliability analysis. [BWR; PWR

    Edison, G.E.; Pugliese, S.L.; Sacramo, R.F.


    Scram experience in a number of operating light water reactors has been reviewed. The date and reactor power of each scram were compiled from monthly operating reports and personal communications with the operating plant personnel. The average scram frequency from "significant" power (defined as P_trip/P_max > approximately 20 percent) was determined as a function of operating life. This relationship was then used to estimate the total number of reactor trips from above approximately 20 percent of full power expected to occur during the life of a nuclear power plant. The shape of the scram frequency vs. operating life curve resembles a typical reliability bathtub curve (failure rate vs. time), but without a rising "wearout" phase, owing to the lack of operating data near the end of plant design life. In this case the failures are represented by "bugs" in the plant system design, construction, and operation which lead to scram. The number of scrams appears to level out at an average of around three per year; the standard deviations from the mean value indicate an uncertainty of about 50 percent. The total number of scrams from significant power that could be expected in a plant designed for a 40-year life would be about 130 if no wearout phase develops near the end of life.
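    The closing estimate can be reproduced by integrating an assumed scram-rate curve: a constant baseline of about three scrams per year plus a decaying early "bug" excess. The `early_excess` and `decay` parameters below are purely illustrative, chosen only to show how a bathtub-style rate without a wearout phase yields roughly 130 scrams over 40 years.

```python
import math

def scrams_per_year(t, baseline=3.0, early_excess=7.0, decay=1.0):
    # Illustrative rate: early design/construction "bugs" decay away,
    # leaving a constant baseline of ~3 scrams per year.
    return baseline + early_excess * math.exp(-decay * t)

def expected_scrams(life_years=40.0, dt=0.01):
    # Numerical integration of the rate over the plant design life.
    steps = int(life_years / dt)
    return sum(scrams_per_year(i * dt) * dt for i in range(steps))
```

With these numbers the integral is 3 x 40 plus a one-off early contribution of about 7, i.e. close to the 130 scrams quoted for a 40-year design life.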

  10. Core Stability in Athletes: A Critical Analysis of Current Guidelines.

    Wirth, Klaus; Hartmann, Hagen; Mickel, Christoph; Szilvas, Elena; Keiner, Michael; Sander, Andre


    Over the last two decades, exercise of the core muscles has gained major interest in professional sports. Research has focused on injury prevention and increasing athletic performance. We analyzed the guidelines for so-called functional strength training for back pain prevention and found that programs were similar to those for back pain rehabilitation; even the arguments were identical. Surprisingly, most exercise specifications have neither been tested for their effectiveness nor compared with the load specifications normally used for strength training. Analysis of the scientific literature on core stability exercises shows that adaptations in the central nervous system (voluntary activation of trunk muscles) have been used to justify exercise guidelines. Adaptations of morphological structures, important for the stability of the trunk and therefore the athlete's health, have not been adequately addressed in experimental studies or in reviews. In this article, we explain why the guidelines created for back pain rehabilitation are insufficient for strength training in professional athletes. We critically analyze common concepts such as 'selective activation' and training on unstable surfaces.

  11. Intraobserver and intermethod reliability for using two different computer programs in preoperative lower limb alignment analysis

    Mohamed Kenawey


    Conclusion: Computer-assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles away from the knee joint.

  12. Reliability of 3D upper limb motion analysis in children with obstetric brachial plexus palsy.

    Mahon, Judy; Malone, Ailish; Kiernan, Damien; Meldrum, Dara


    Kinematics, measured by 3D upper limb motion analysis (3D-ULMA), can potentially increase understanding of movement patterns by quantifying individual joint contributions. Reliability in children with obstetric brachial plexus palsy (OBPP) has not been established.

  13. Analysis methods for structure reliability of piping components

    Schimpfke, T.; Grebner, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)


    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  14. Use of Fault Tree Analysis for Automotive Reliability and Safety Analysis

    Lambert, H


    Fault tree analysis (FTA) evolved from the aerospace industry in the 1960s. A fault tree is a deductive logic model that is generated with a top undesired event in mind. FTA answers the question "how can something occur?", as opposed to failure modes and effects analysis (FMEA), which is inductive and answers the question "what if?" FTA is used in risk, reliability and safety assessments, and is currently used by several industries such as nuclear power and chemical processing. The automotive industry typically uses failure modes and effects analysis, such as design FMEAs and process FMEAs, but the use of FTA has spread to the automotive industry as well. This paper discusses the use of FTA for automotive applications. With the addition of automotive electronics for various applications in systems such as engine/power control, cruise control and braking/traction, FTA is well suited to addressing failure modes within these systems. FTA can determine the importance of these failure modes from various perspectives such as cost, reliability and safety. A fault tree analysis of a car starting system is presented as an example.
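    A quantitative fault tree reduces to combining basic-event probabilities through OR and AND gates, assuming independent events. The car-starting tree below is a hypothetical sketch in the spirit of the paper's example; the event names, tree structure and probabilities are not taken from it.

```python
def p_or(*ps):
    # Probability that at least one independent event occurs.
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_and(*ps):
    # Probability that all independent events occur.
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

# Hypothetical basic-event probabilities (per demand)
P_BATTERY_DEAD   = 1e-3
P_STARTER_FAULT  = 5e-4
P_NO_FUEL        = 1e-3
P_FUEL_PUMP_FAIL = 3e-4
P_COIL_A_FAIL    = 2e-2   # assume two redundant ignition coils:
P_COIL_B_FAIL    = 2e-2   # ignition lost only if BOTH fail (AND gate)

no_crank      = p_or(P_BATTERY_DEAD, P_STARTER_FAULT)
no_ignition   = p_and(P_COIL_A_FAIL, P_COIL_B_FAIL)
no_fuel       = p_or(P_NO_FUEL, P_FUEL_PUMP_FAIL)
car_wont_start = p_or(no_crank, no_ignition, no_fuel)   # top event
```

The AND gate shows why redundancy matters: two 2% coil failures contribute only 4e-4 to the top event, comparable to a single rare basic event.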

  15. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)


    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  16. Uncertainty analysis with reliability techniques of fluvial hydraulic simulations

    Oubennaceur, K.; Chokmani, K.; Nastev, M.


    Flood inundation models are commonly used to simulate hydraulic and floodplain inundation processes, a prerequisite to successful floodplain management and the preparation of appropriate flood risk mitigation plans. Selecting statistically significant ranges of the variables involved in inundation modelling is crucial for model performance. This involves various levels of uncertainty, which due to their cumulative nature can lead to considerable uncertainty in the final results. Therefore, in addition to the validation of the model results, there is a need for a clear understanding and identification of sources of uncertainty and for measuring the model uncertainty. A reliability approach called the Point Estimate Method is presented to quantify the uncertainty effects of the input data and to calculate the propagation of uncertainty in the inundation modelling process. The Point Estimate Method is a special case of numerical quadrature based on orthogonal polynomials. It allows evaluation of the low-order moments of performance functions of independent random variables, such as the water depth. The variables considered in the analyses include elevation data, flow rate and Manning's coefficient n, each given with its own probability distribution. The approach is applied to a 45 km reach of the Richelieu River, Canada, between Rouses Point and Fryers Rapids. The finite element hydrodynamic model H2D2 was used to solve the shallow water equations (SWE) and provide maps of expected water depths and associated spatial distributions of standard deviations as a measure of uncertainty. The results indicate that for the simulated flow rates of 1113, 1206, and 1282, the uncertainties in water depths have ranges of 25 cm, 30 cm, and 60 cm, respectively. This kind of information is useful for decision making and risk management in the context of flood risk assessment.
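    The classic two-point variant of the Point Estimate Method (Rosenblueth's scheme) evaluates the model at the 2^n combinations of mean plus/minus one standard deviation of each input and recovers the output mean and standard deviation from those runs. A minimal sketch for independent, symmetric inputs; the `depth` function is a toy stand-in for a hydraulic model (not H2D2), and its form and parameters are purely illustrative.

```python
from itertools import product
from math import sqrt

def point_estimate(model, means, stds):
    """Rosenblueth's 2^n two-point estimate for independent,
    symmetrically distributed input variables (equal weights 1/2^n)."""
    n = len(means)
    vals = []
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(model(x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, sqrt(var)

# Hypothetical depth model: inputs are flow rate Q, Manning's n,
# and a bed-elevation term z (illustrative only).
depth = lambda x: 0.4 * x[0] ** 0.6 * x[1] ** 0.6 - x[2]
mean_depth, std_depth = point_estimate(
    depth, means=[1200.0, 0.03, 1.0], stds=[60.0, 0.005, 0.2])
```

Only 2^n model runs are needed, which is why the method is attractive when each hydrodynamic run is expensive; for a linear model it reproduces the exact mean and standard deviation.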

  17. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    Nikulin, M; Mesbah, M; Limnios, N


    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  18. Reliability analysis of a gravity-based foundation for wind turbines

    Vahdatirad, Mohammad Javad; Griffiths, D. V.; Andersen, Lars Vabbersgaard


    Deterministic code-based designs proposed for wind turbine foundations are typically biased on the conservative side and overestimate the probability of failure, which can lead to higher than necessary construction cost. In this study, reliability analysis of a gravity-based foundation concerning … technique to perform the reliability analysis. The calibrated code-based design approach leads to savings of up to 20% in the concrete foundation volume, depending on the target annual reliability level. The study can form the basis for future optimization of deterministic-based designs for wind turbine foundations.
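    A sketch of the basic calculation behind a target annual reliability level: with resistance R and load effect S both taken as normal (an assumption made here for illustration, not the study's stochastic model), the reliability index beta has a closed form and can be cross-checked by crude Monte Carlo. All distribution parameters below are hypothetical.

```python
import math
import random

def beta_index(mu_r, sig_r, mu_s, sig_s):
    # First-order reliability index for independent normal
    # resistance R and load S, limit state g = R - S.
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def pf_analytic(beta):
    # Failure probability Phi(-beta) via the complementary error function.
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def pf_monte_carlo(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=7):
    # Crude Monte Carlo estimate of P(R < S).
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sig_r) < rng.gauss(mu_s, sig_s)
                for _ in range(n))
    return fails / n
```

Calibrating a design then amounts to shrinking the foundation until beta drops to the target value, which is the mechanism by which reliability-based calibration recovers the concrete volume that conservative deterministic codes give away.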

  19. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)


    The importance of human reliability analysis (HRA), which predicts the possibility of error occurrence in both quantitative and qualitative manners, is gradually increasing because of the effects of human errors on system safety. HRA needs task analysis as a prerequisite step, but extant task analysis techniques have the problem that the collection of information about the situation in which a human error occurs depends entirely on the HRA analysts. This problem makes the results of task analysis inconsistent and unreliable. To address it, KAERI developed structural information analysis (SIA), which helps to analyze a task's structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized support system named CASIA (Computer Aid for SIA) was developed to support performing HRA with the SIA method. Additionally, by applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the CASIA database. CASIA is expected to help HRA analysts perform the analysis more easily and consistently. As more analyses are performed and more data are accumulated in the CASIA database, HRA analysts can freely share and smoothly spread their analysis experience, and thereby the quality of HRA will be improved. 35 refs., 38 figs., 25 tabs. (Author)

  20. A Review: Passive System Reliability Analysis – Accomplishments and Unresolved Issues



    Reliability assessment of passive safety systems is one of the important issues, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies, such as Reliability Evaluation of Passive Safety System (REPAS), Reliability Methods for Passive Safety Functions (RMPS) and Analysis of Passive Systems ReliAbility (APSRA), have been developed in the past and used to assess the reliability of various passive safety systems. While these methodologies have certain features in common, they differ in their treatment of certain issues, for example model uncertainties and the deviation of geometric and process parameters from their nominal values. This paper presents the state of the art of passive system reliability assessment methodologies, their accomplishments and the remaining issues. In this review, three critical issues pertaining to passive system performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty: best-estimate, code-based phenomenological simulations of natural convection passive systems can carry a significant amount of uncertainty, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of the dynamic failure characteristics of components of passive systems: REPAS, RMPS and APSRA do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark hold-up tank problem shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability
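    The error from ignoring component dynamism can be illustrated with a toy sequence-dependent failure (not the hold-up tank model itself): suppose the system fails only if component A fails and component B then fails after it, before the end of the mission. A static fault-tree AND gate ignores the ordering and overestimates the failure probability. All rates below are illustrative.

```python
import math
import random

LAM_A, LAM_B, MISSION = 1e-3, 2e-3, 1000.0   # illustrative rates (1/h), hours

def static_and():
    # Static fault-tree AND: P(A fails in mission) * P(B fails in mission),
    # with no regard to the order in which the failures occur.
    pa = 1.0 - math.exp(-LAM_A * MISSION)
    pb = 1.0 - math.exp(-LAM_B * MISSION)
    return pa * pb

def dynamic_mc(n=200_000, seed=3):
    # Dynamic Monte Carlo: sample failure times and require the
    # failure SEQUENCE (A before B, both within the mission).
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        ta = rng.expovariate(LAM_A)
        tb = rng.expovariate(LAM_B)
        if ta < tb <= MISSION:
            fails += 1
    return fails / n
```

With these numbers the static AND gives about 0.55 while the sequence-aware estimate is about 0.23, a concrete version of the estimation error the review attributes to neglecting component dynamism.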

  1. Stochastic Response and Reliability Analysis of Hysteretic Structures

    Mørk, Kim Jørgensen

    During the last 30 years, response analysis of structures under random excitation has been studied in detail. These studies are motivated by the fact that most of nature's excitations, such as earthquakes, wind and wave loads, exhibit randomly fluctuating characters. For safety reasons this randomness...

  2. Reliability Analysis of a Two-Span Floor Designed According to ...


    The structural analysis and design of the timber floor system was carried out using a deterministic approach … The cell structure of hardwoods is more complex than … [12] BS EN -1-1: Eurocode 5: Design of Timber Structures, Part 1-1.

  3. Reliability analysis of the control system of large-scale vertical mixing equipment


    The control system of vertical mixing equipment is a centralized distributed monitoring system (CDMS). A reliability analysis model was built and the analysis conducted based on reliability modeling theories such as graph theory, Markov processes, and redundancy theory. Analysis and operational results show that the control system can meet all technical requirements for high-energy composite solid propellant manufacturing. The reliability performance of the control system can be considerably improved by adopting a control strategy that combines hot-spare redundancy of the primary system with cold-spare redundancy of the emergency one. The reliability performance can also be improved by adopting a redundancy strategy or by improving the quality of each component and cable of the system.
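    The benefit of the hot-spare/cold-spare strategy can be seen from the textbook reliability formulas for a two-unit redundant pair with constant failure rate, assuming independent failures and (for the cold spare) perfect switching; the rate and mission time below are illustrative, not from the paper.

```python
import math

def r_single(lam, t):
    # Reliability of one unit with constant failure rate lam.
    return math.exp(-lam * t)

def r_hot_pair(lam, t):
    # Hot-spare (active parallel) pair: both run, either can carry the load.
    r = r_single(lam, t)
    return 1.0 - (1.0 - r) ** 2

def r_cold_pair(lam, t):
    # Ideal cold standby with perfect switching: the spare does not age
    # until it takes over (Erlang-2 survival function).
    return math.exp(-lam * t) * (1.0 + lam * t)

lam, t = 1e-4, 5000.0   # illustrative failure rate (1/h) and mission (h)
```

At lam*t = 0.5 the single unit survives with probability about 0.61, the hot pair about 0.85, and the ideal cold pair about 0.91, which is why combining both forms of sparing raises system reliability markedly.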

  4. Structured information analysis for human reliability analysis of emergency tasks in nuclear power plants

    Jung, Won Dea; Kim, Jae Whan; Park, Jin Kyun; Ha, Jae Joo [Korea Atomic Energy Research Institute, Taejeon (Korea)


    More than twenty HRA (Human Reliability Analysis) methodologies have been developed and used for the safety analysis in nuclear field during the past two decades. However, no methodology appears to have universally been accepted, as various limitations have been raised for more widely used ones. One of the most important limitations of conventional HRA is insufficient analysis of the task structure and problem space. To resolve this problem, we suggest SIA (Structured Information Analysis) for HRA. The proposed SIA consists of three parts. The first part is the scenario analysis that investigates the contextual information related to the given task on the basis of selected scenarios. The second is the goals-means analysis to define the relations between the cognitive goal and task steps. The third is the cognitive function analysis module that identifies the cognitive patterns and information flows involved in the task. Through the three-part analysis, systematic investigation is made possible from the macroscopic information on the tasks to the microscopic information on the specific cognitive processes. It is expected that analysts can attain a structured set of information that helps to predict the types and possibility of human error in the given task. 48 refs., 12 figs., 11 tabs. (Author)

  6. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Xin-Jia Meng


    Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of searching for the most probable failure point (MPP) of inverse reliability, and the MPP search is then performed within the CLA-CO framework. The method improves the MPP search through two elements: one is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline and achieves higher computational efficiency. Additionally, there is no difficulty in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
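    The MPP search that the abstract builds on is classically done with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration in standard normal space. A minimal single-discipline sketch (the multidisciplinary CLA-CO coordination itself is not reproduced here); the limit-state function used in the test is a toy example.

```python
import math

def grad(g, u, h=1e-6):
    # Forward-difference gradient of the limit state g at point u.
    g0 = g(u)
    out = []
    for i in range(len(u)):
        up = list(u)
        up[i] += h
        out.append((g(up) - g0) / h)
    return out

def hlrf_mpp(g, n, iters=50):
    """HL-RF search for the most probable failure point of g(u) = 0
    in standard normal space; returns the MPP and the reliability
    index beta (distance from the origin to the MPP)."""
    u = [0.0] * n
    for _ in range(iters):
        gv = g(u)
        dg = grad(g, u)
        norm2 = sum(d * d for d in dg)
        c = (sum(d * x for d, x in zip(dg, u)) - gv) / norm2
        u = [c * d for d in dg]
    beta = math.sqrt(sum(x * x for x in u))
    return u, beta
```

For a linear limit state such as g(u) = 3 - u1 - u2, the iteration lands on the exact MPP (1.5, 1.5) with beta = 3/sqrt(2) in a single step; nonlinear limit states take a few iterations.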

  7. Automated migration analysis based on cell texture: method & reliability

    Chittenden Thomas W


    Background: In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of second-hand smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results: The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison to manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion: Our method is indistinguishable from careful manual determination of cell front lines, with the advantages of full automation, objectivity, and speed.
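    Once the texture-preprocessing stage has produced a binary mask of where cell texture is present, extracting a front line and a distance-vs-time reading reduces to a simple per-column scan. This sketch is a hypothetical reduction of the idea, not the paper's algorithm, and the pixel scale is an assumed value.

```python
def migration_front(mask):
    """Leading edge from a binary cell mask (list of rows, 1 = cell
    texture present): for each column, the largest row index that
    still contains cells, i.e. the farthest point the cells reached."""
    n_rows, n_cols = len(mask), len(mask[0])
    front = []
    for c in range(n_cols):
        rows = [r for r in range(n_rows) if mask[r][c]]
        front.append(max(rows) if rows else -1)
    return front

def mean_advance(front_t0, front_t1, microns_per_pixel=1.3):
    """Average distance traversed between two frames.
    microns_per_pixel is a hypothetical calibration constant."""
    pairs = [(a, b) for a, b in zip(front_t0, front_t1)
             if a >= 0 and b >= 0]
    return sum(b - a for a, b in pairs) / len(pairs) * microns_per_pixel
```

Applying `migration_front` to each frame of a time series and differencing the fronts gives the distance-traversed-vs-time report the abstract describes.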

  8. Sensitivity analysis for reliable design verification of nuclear turbosets

    Zentner, Irmela, E-mail: irmela.zentner@edf.f [Lamsid-Laboratory for Mechanics of Aging Industrial Structures, UMR CNRS/EDF, 1, avenue Du General de Gaulle, 92141 Clamart (France); EDF R and D-Structural Mechanics and Acoustics Department, 1, avenue Du General de Gaulle, 92141 Clamart (France); Tarantola, Stefano [Joint Research Centre of the European Commission-Institute for Protection and Security of the Citizen, T.P. 361, 21027 Ispra (Italy); Rocquigny, E. de [Ecole Centrale Paris-Applied Mathematics and Systems Department (MAS), Grande Voie des Vignes, 92 295 Chatenay-Malabry (France)


    In this paper, we present an application of sensitivity analysis for the design verification of nuclear turbosets. Before the acquisition of a turbogenerator, power operators perform an independent design assessment in order to assure safe operating conditions of the new machine in its environment. The variables of interest are related to the vibration behaviour of the machine: its eigenfrequencies and its dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to nonexistent or imprecise information about the design, as well as to the interaction of the rotating machinery with supporting structures and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant sources of uncertainty. Further studies, if necessary, can then focus on the predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

  9. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    U. Ayala


    Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.

  10. A Reliable Method for Rhythm Analysis during Cardiopulmonary Resuscitation

    Ayala, U.; Irusta, U.; Ruiz, J.; Eftestøl, T.; Kramer-Johansen, J.; Alonso-Atienza, F.; Alonso, E.; González-Otero, D.


    Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies. PMID:24895621

  11. Probability maps as a measure of reliability for intervisibility analysis

    Joksić Dušan


    Digital terrain models (DTMs) represent segments of spatial databases related to the presentation of terrain features and landforms. Square-grid elevation models (DEMs) have emerged as the most widely used structure during the past decade because of their simplicity and simple computer implementation. They have become an important segment of Topographic Information Systems (TIS), storing natural and artificial landscapes in the form of digital models. This kind of data structure is especially suitable for morphometric terrain evaluation and analysis, which is very important in environmental and urban planning and Earth surface modeling applications. One of the most often used functionalities of Geographical Information System software packages is intervisibility, or viewshed, analysis of terrain. Intervisibility determination from analog topographic maps may be very exhausting, because of the large number of profiles that have to be extracted and compared. Terrain representation in the form of DEM databases facilitates this task. This paper describes a simple algorithm for terrain viewshed analysis using DEM database structures, taking into consideration the influence of the uncertainties of such data on the results obtained. The concept of probability maps is introduced as a means of evaluating the results, and is presented as a thematic display.
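
The probability-map idea in the record above can be illustrated with a minimal sketch: a Monte Carlo line-of-sight check along a one-dimensional DEM profile, where Gaussian elevation error turns the binary visible/hidden answer into a visibility probability. The profile values, error level, and function names are illustrative, not taken from the paper.

```python
import random

def visible(profile, i, j):
    """True if point j is visible from point i along a terrain profile
    (no intermediate elevation rises above the straight sight line)."""
    if abs(j - i) <= 1:
        return True
    lo, hi = (i, j) if i < j else (j, i)
    for k in range(lo + 1, hi):
        # elevation of the sight line at position k
        t = (k - i) / (j - i)
        line_z = profile[i] + t * (profile[j] - profile[i])
        if profile[k] > line_z:
            return False
    return True

def visibility_probability(profile, i, j, sigma, trials=2000, seed=1):
    """Monte Carlo estimate of P(j visible from i) when each DEM cell
    carries independent Gaussian elevation error with std dev sigma."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        noisy = [z + rng.gauss(0.0, sigma) for z in profile]
        hits += visible(noisy, i, j)
    return hits / trials

profile = [100, 102, 101, 108, 103, 104, 110]   # synthetic terrain profile
p = visibility_probability(profile, 0, 6, sigma=0.5)
```

Repeating this for every target cell of a 2-D grid yields the probability map the abstract describes, instead of a single uncertain yes/no viewshed.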

  12. Finite State Machine Based Evaluation Model for Web Service Reliability Analysis

    M, Thirumaran; Abarna, S; P, Lakshmi


    Nowadays, change must be accommodated within ever shorter times, since reaction-time requirements are shrinking every moment. The Business Logic Evaluation Model (BLEM) is the proposed solution, targeting business logic automation and facilitating business experts in writing sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability, and other criteria of modified business logic at run time. Web service QoS depends heavily on the reliability of the service. Hence today's service providers consider reliability the major factor, and any problem in the reliability of the service should be overcome promptly in order to achieve the expected level of reliability. In this paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM will be extended to analy...
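
As a toy illustration of the FSM idea (not the paper's actual model), a finite state machine can encode which sequences of change-evaluation events are acceptable for a service's business logic; every state and event name below is hypothetical.

```python
def make_fsm(transitions, start, accepting):
    """Tiny FSM evaluator: transitions maps (state, event) -> next state.
    Returns a function that accepts or rejects an event sequence."""
    def accepts(events):
        state = start
        for e in events:
            key = (state, e)
            if key not in transitions:
                return False   # undefined transition: change rejected
            state = transitions[key]
        return state in accepting
    return accepts

# hypothetical change-evaluation states for a web service's business logic
t = {
    ("idle", "submit"):   "checking",
    ("checking", "pass"): "deployed",
    ("checking", "fail"): "idle",
}
evaluate = make_fsm(t, start="idle", accepting={"deployed"})
```

A run such as `evaluate(["submit", "pass"])` models a logic change that is checked and deployed, while any undefined transition is rejected outright.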

  13. Reliability analysis and risk-based methods for planning of operation & maintenance of offshore wind turbines

    Sørensen, John Dalsgaard


    Reliability analysis and probabilistic models for wind turbines are considered with special focus on structural components and application for reliability-based calibration of partial safety factors. The main design load cases to be considered in design of wind turbine components are presented, including the effects of the control system and possible faults due to failure of electrical/mechanical components. Considerations are presented on the target reliability level for wind turbine structural components. Application is shown for reliability-based calibration of partial safety factors for extreme and fatigue limit states. Operation & Maintenance planning often follows corrective and preventive strategies based on information from condition monitoring and structural health monitoring systems. A reliability- and risk-based approach is presented where a life-cycle approach…

  14. Reliability analysis of M/G/1 queues with general retrial times and server breakdowns

    WANG Jinting


    This paper concerns the reliability issues as well as queueing analysis of M/G/1 retrial queues with general retrial times and server subject to breakdowns and repairs. We assume that the server is unreliable and customers who find the server busy or down are queued in the retrial orbit in accordance with a first-come-first-served discipline. Only the customer at the head of the orbit queue is allowed for access to the server. The necessary and sufficient condition for the system to be stable is given. Using a supplementary variable method, we obtain the Laplace-Stieltjes transform of the reliability function of the server and a steady state solution for both queueing and reliability measures of interest. Some main reliability indexes, such as the availability, failure frequency, and the reliability function of the server, are obtained.

  15. Fatigue damage reliability analysis for Nanjing Yangtze river bridge using structural health monitoring data

    HE Xu-hui; CHEN Zheng-qing; YU Zhi-wu; HUANG Fang-lin


    To evaluate the fatigue damage reliability of critical members of the Nanjing Yangtze river bridge, the corresponding expressions for calculating structural fatigue damage reliability were derived according to the stress-number curve and Miner's rule. A fatigue damage reliability analysis of some critical members of the Nanjing Yangtze river bridge was carried out using the strain-time histories measured by the structural health monitoring system of the bridge. The corresponding stress spectra were obtained by the real-time rain-flow counting method. Fatigue damage was calculated by the reliability method at different reliability levels and compared with Miner's rule. The results show that the fatigue damage of critical members of the Nanjing Yangtze river bridge is very small due to its low live-load stress level.
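
The damage calculation described above rests on Miner's rule combined with an S-N curve; a minimal sketch, with illustrative material constants and a hypothetical stress spectrum (not the bridge's measured data):

```python
import math

def cycles_to_failure(S, C=1e12, m=3.0):
    """S-N curve N = C / S^m (C and m are illustrative material constants)."""
    return C / S**m

def miner_damage(spectrum, C=1e12, m=3.0):
    """Miner's rule: D = sum(n_i / N_i) over stress-range bins; failure is
    predicted when D reaches 1. spectrum: (stress_range_MPa, cycles) pairs."""
    return sum(n / cycles_to_failure(S, C, m) for S, n in spectrum)

# hypothetical stress spectrum from rain-flow counting of measured strain
spectrum = [(40.0, 1_000_000), (80.0, 50_000), (120.0, 2_000)]
D = miner_damage(spectrum)
```

A value of D well below 1, as here, corresponds to the abstract's conclusion that the accumulated fatigue damage is small at low live-load stress levels.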

  16. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann;


    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas, there ...

  17. Technique for continuous high-resolution analysis of trace substances in firn and ice cores

    Roethlisberger, R.; Bigler, M.; Hutterli, M.; Sommer, S.; Stauffer, B.; Junghans, H.G.; Wagenbach, D.


    The very successful application of a CFA (Continuous flow analysis) system in the GRIP project (Greenland Ice Core Project) for high-resolution ammonium, calcium, hydrogen peroxide, and formaldehyde measurements along a deep ice core led to further development of this analysis technique. The authors included methods for continuous analysis of sodium, nitrate, sulfate, and electrolytical conductivity, while the existing methods have been improved. The melting device has been optimized to allow the simultaneous analysis of eight components. Furthermore, a new melter was developed for analyzing firn cores. The system has been used in the frame of the European Project for Ice Coring in Antarctica (EPICA) for in-situ analysis of several firn cores from Dronning Maud Land, Antarctica, and for the new ice core drilled at Dome C, Antarctica.

  18. Strategy for Synthesis of Flexible Heat Exchanger Networks Embedded with System Reliability Analysis

    YI Dake; HAN Zhizhong; WANG Kefeng; YAO Pingjing


    System reliability can strongly influence the performance of a heat exchanger network (HEN). In this paper, an optimization method with system reliability analysis for flexible HENs using genetic/simulated annealing algorithms (GA/SA) is presented. An initial flexible arrangement of the HEN is obtained from the pseudo-temperature enthalpy diagram. For determining the system reliability of the HEN, the connections of heat exchangers (HEXs) and independent subsystems in the HEN are analyzed by the connection sequence matrix (CSM), and the system reliability is measured by the independent subsystem containing the maximum number of HEXs in the HEN. For a HEN that does not meet the system reliability criterion, HEN decoupling is applied and the independent subsystems in the HEN are changed by removing the decoupling HEX, and thus the system reliability is elevated. After that, the heat duty redistribution based on the relevant elements of the heat load loops and the HEX areas are optimized by GA/SA. Then the favorable network configuration, which matches both the most economical cost and the system reliability criterion, is located. Moreover, particular features belonging to suitable decoupling HEXs are extracted from the calculations. A numerical example is presented to verify that the proposed strategy is effective for formulating an optimal flexible HEN with a system reliability measurement.
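
The connection-sequence-matrix analysis amounts to finding the independent subsystems, i.e. the connected components of the exchanger connection graph; a small union-find sketch (the network below is hypothetical, and using the largest component as the reliability measure follows the abstract's convention):

```python
def independent_subsystems(n_hex, connections):
    """Group heat exchangers into independent subsystems (connected
    components) from a list of (i, j) connections, via union-find."""
    parent = list(range(n_hex))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    for i, j in connections:
        union(i, j)
    groups = {}
    for h in range(n_hex):
        groups.setdefault(find(h), []).append(h)
    return list(groups.values())

# hypothetical 6-exchanger network; pairs share a stream connection
subs = independent_subsystems(6, [(0, 1), (1, 2), (3, 4)])
largest = max(len(s) for s in subs)   # surrogate system reliability measure
```

Decoupling, in this picture, means removing an exchanger so that a large component splits into smaller independent subsystems.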

  19. The Revised Child Anxiety and Depression Scale: A systematic review and reliability generalization meta-analysis.

    Piqueras, Jose A; Martín-Vivar, María; Sandin, Bonifacio; San Luis, Concepción; Pineda, David


    Anxiety and depression are among the most common mental disorders during childhood and adolescence. Among the instruments for the brief screening assessment of symptoms of anxiety and depression, the Revised Child Anxiety and Depression Scale (RCADS) is one of the more widely used. Previous studies have demonstrated the reliability of the RCADS for different assessment settings and different versions. The aims of this study were to examine the mean reliability of the RCADS and the influence of the moderators on the RCADS reliability. We searched the EBSCO, PsycINFO, Google Scholar, Web of Science, and NCBI databases and retrieved other articles manually from the reference lists of extracted articles. A total of 146 studies were included in our meta-analysis. The RCADS showed robust internal consistency reliability in different assessment settings, countries, and languages. We found that the reliability of the RCADS was significantly moderated only by the version of the RCADS. However, these differences in reliability between different versions of the RCADS were slight and can be due to the number of items. We did not examine factor structure, factorial invariance across gender, age, or country, and test-retest reliability of the RCADS. The RCADS is a reliable instrument for cross-cultural use, with the advantage of providing more information with a low number of items in the assessment of both anxiety and depression symptoms in children and adolescents. Copyright © 2017. Published by Elsevier B.V.
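
The internal-consistency reliability pooled in such meta-analyses is Cronbach's alpha; a minimal sketch of how alpha is computed from item-level scores (the toy data are invented, not RCADS responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-level scores.
    items: list of k lists, each holding the n respondents' scores on one item."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[p] for item in items) for p in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / var(totals))

# toy data: 4 items, 5 respondents, scores on a 1-5 scale
scores = [
    [2, 3, 3, 4, 4],
    [2, 2, 3, 4, 5],
    [1, 3, 3, 3, 4],
    [2, 3, 4, 4, 5],
]
alpha = cronbach_alpha(scores)
```

When items covary strongly, as in this toy set, alpha approaches 1; a reliability generalization study pools such coefficients across many administrations.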

  20. A Reliability-Based Analysis of Bicyclist Red-Light Running Behavior at Urban Intersections

    Mei Huan


    This paper describes the red-light running behavior of bicyclists at urban intersections based on a reliability analysis approach. Bicyclists' crossing behavior was collected by video recording. Four proportional hazard models using the Cox, exponential, Weibull, and Gompertz distributions were proposed to analyze the covariate effects on safe crossing reliability. The influential variables include personal characteristics, movement information, and situation factors. The results indicate that the Cox hazard model gives the best description of bicyclists' red-light running behavior. Bicyclists' safe crossing reliabilities decrease as their waiting times increase. About 15.5% of bicyclists had negligible waiting times; they are at high risk of red-light running and have very low safe crossing reliabilities. The proposed reliability models can capture the covariates' effects on bicyclists' crossing behavior at signalized intersections. Both personal characteristics and traffic conditions have significant effects on bicyclists' safe crossing reliability. A bicyclist is more likely to have low safe crossing reliability and high violation risk when more riders are crossing against the red light and when they wait closer to the motorized lane. These findings provide valuable insights into understanding bicyclists' violation behavior, and their implications for assessing bicyclists' safe crossing reliability are discussed.

  1. Report on the analysis of field data relating to the reliability of solar hot water systems.

    Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM)


    Utilities are overseeing the installation of thousands of solar hot water (SHW) systems. Utility planners have begun to ask for quantitative measures of the expected lifetimes of these systems so that they can properly forecast their loads. This report, which augments a 2009 reliability analysis effort by Sandia National Laboratories (SNL), addresses this need. Additional reliability data have been collected, added to the existing database, and analyzed. The results are presented. Additionally, formal reliability theory is described, including the bathtub curve, which is the most common model for characterizing the lifetime reliability of systems and for predicting failures in the field. Reliability theory is used to assess the SNL reliability database. This assessment shows that the database is heavily weighted with data that describe the reliability of SHW systems early in their lives, during the warranty period, but it contains few measured data that describe the ends of SHW systems' lives. End-of-life data are the most critical ones for defining the reliability of SHW systems well enough to answer the questions that the utilities pose. Several ideas are presented for collecting the required data, including photometric analysis of aerial photographs of installed collectors, statistical and neural network analysis of energy bills from solar homes, and the development of simple algorithms to allow conventional SHW controllers to announce system failures and record the details of the event, similar to how aircraft black box recorders perform. Some information is also presented about public expectations for the longevity of a SHW system, information that is useful in developing reliability goals.
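
The bathtub curve mentioned above can be sketched as a hazard function with a decreasing infant-mortality term, a constant useful-life term, and an increasing wear-out term; all rates below are illustrative, not fitted to SHW data:

```python
import math

def bathtub_hazard(t, infant=0.5, wearout_onset=15.0, base=0.02):
    """Illustrative bathtub curve (failures per year at age t years):
    exponentially decaying infant-mortality term + constant base rate
    + linearly growing wear-out term after wearout_onset."""
    wearout = 0.05 * (t - wearout_onset) if t > wearout_onset else 0.0
    return base + 0.1 * math.exp(-t / infant) + wearout

def reliability(t, dt=0.01):
    """R(t) = exp(-cumulative hazard), integrated numerically."""
    steps = int(t / dt)
    h = sum(bathtub_hazard(k * dt) * dt for k in range(steps))
    return math.exp(-h)
```

The report's point is visible in this shape: warranty-period data sample only the left (infant) side of the curve, while lifetime forecasts depend on the wear-out side, where data are scarce.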

  2. Reliability and life-cycle analysis of deteriorating systems

    Sánchez-Silva, Mauricio


    This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation considering both conceptual discussions and formal mathematical formulations. It also describes the basics concepts and the various modeling aspects of life-cycle analysis (LCA).  It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system’s condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text ...

  3. Reliability Analysis of Repairable Systems Using Stochastic Point Processes

    TAN Fu-rong; JIANG Zhi-bin; BAI Tong-shuo


    In order to analyze the failure data from repairable systems, the homogeneous Poisson process (HPP) is usually used. In general, the HPP cannot be applied to analyze the entire life cycle of a complex, repairable system because the rate of occurrence of failures (ROCOF) of the system changes over time rather than remaining stable. However, from a practical point of view, it is always preferable to apply the simplest method to address problems and to obtain useful practical results. Therefore, we attempted to use the HPP model to analyze the failure data from real repairable systems. A graphic method and the Laplace test were also used in the analysis. Results of numerical applications show that the HPP model may be a useful tool for the entire life cycle of repairable systems.
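
The Laplace test used alongside the HPP model checks the failure times for a trend; a minimal sketch, where the statistic U is approximately standard normal under the HPP (no-trend) hypothesis and the failure times below are invented:

```python
import math

def laplace_test(failure_times, T):
    """Laplace trend test for a point process observed on (0, T].
    U ~ N(0,1) under the HPP hypothesis; large positive U suggests
    deterioration, large negative U suggests reliability growth."""
    n = len(failure_times)
    return math.sqrt(12 * n) * (sum(failure_times) / (n * T) - 0.5)

# evenly spread failures -> no trend, U near 0
u_flat = laplace_test([10, 20, 30, 40, 50, 60, 70, 80, 90], T=100)
# failures clustered late in life -> deterioration, U well above 0
u_late = laplace_test([60, 70, 80, 85, 90, 92, 95, 97, 99], T=100)
```

When |U| stays small, the HPP assumption is defensible, which is the pragmatic case the abstract argues for.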

  4. Earth's core and inner-core resonances from analysis of VLBI nutation and superconducting gravimeter data

    Rosat, S.; Lambert, S. B.; Gattano, C.; Calvo, M.


    Geophysical parameters of the deep Earth's interior can be evaluated through the resonance effects associated with the core and inner-core wobbles on the forced nutations of the Earth's figure axis, as observed by very long baseline interferometry (VLBI), or on the diurnal tidal waves, retrieved from the time-varying surface gravity recorded by superconducting gravimeters (SGs). In this paper, we invert for the rotational mode parameters from both techniques to retrieve geophysical parameters of the deep Earth. We analyse surface gravity data from 15 SG stations and VLBI delays accumulated over the last 35 yr. We show existing correlations between several basic Earth parameters and then decide to invert for the rotational mode parameters. We employ a Bayesian inversion based on the Metropolis-Hastings algorithm with a Markov-chain Monte Carlo method. We obtain estimates of the free core nutation resonant period and quality factor that are consistent for both techniques. We also attempt an inversion for the free inner-core nutation (FICN) resonant period from gravity data. The most probable solution gives a period close to the annual prograde term (or S1 tide). However, the 95 per cent confidence interval extends the possible values between roughly 28 and 725 d for gravity, and from 362 to 414 d from nutation data, depending on the prior bounds. The estimated long-period nutations and the respective small diurnal tidal constituents are hence not determined precisely enough for a correct determination of the FICN complex frequency.
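
The Bayesian inversion described above rests on the Metropolis-Hastings algorithm; a toy sketch on a one-parameter problem with a flat prior and Gaussian likelihood (the "observations", prior bounds, and tuning constants are invented, not the SG/VLBI inversion):

```python
import math
import random

def log_posterior(theta, data, sigma=1.0, lo=0.0, hi=1000.0):
    """Flat prior on [lo, hi] days, Gaussian measurement likelihood."""
    if not (lo <= theta <= hi):
        return -math.inf
    return -sum((x - theta) ** 2 for x in data) / (2.0 * sigma ** 2)

def metropolis_hastings(data, n_iter=20000, step=0.5, start=400.0, seed=7):
    """Random-walk Metropolis-Hastings sampler for a single parameter."""
    rng = random.Random(seed)
    theta, lp = start, log_posterior(start, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# synthetic "resonant period" observations, in days (invented numbers)
data = [429.8, 430.3, 429.5, 430.6, 430.1]
chain = metropolis_hastings(data)
posterior_mean = sum(chain[10000:]) / 10000.0
```

The histogram of the second half of the chain approximates the posterior; wide, prior-dependent intervals such as the FICN result correspond to a posterior that the data barely constrain.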

  5. Genome-wide analysis of core promoter elements from conserved human and mouse orthologous pairs

    Jin, Victor X.; Singer, Gregory AC; Agosto-Pérez, Francisco J; Liyanarachchi, Sandya; Davuluri, Ramana V.


    Background The canonical core promoter elements consist of the TATA box, initiator (Inr), downstream core promoter element (DPE), TFIIB recognition element (BRE) and the newly-discovered motif 10 element (MTE). The motifs for these core promoter elements are highly degenerate, which tends to lead to a high false discovery rate when attempting to detect them in promoter sequences. Results In this study, we have performed the first analysis of these core promoter elements in orthologous mouse a...

  6. Core Flow Distribution from Coupled Supercritical Water Reactor Analysis

    Po Hu


    This paper introduces an extended code package PARCS/RELAP5 to analyze steady state of SCWR US reference design. An 8 × 8 quarter core model in PARCS and a reactor core model in RELAP5 are used to study the core flow distribution under various steady state conditions. The possibility of moderator flow reversal is found in some hot moderator channels. Different moderator flow orifice strategies, both uniform across the core and nonuniform based on the power distribution, are explored with the goal of preventing the reversal.

  7. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    Tang, J


    A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, and then demonstrating how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and Boolean functions provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis.
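
The Boolean-function view can be sketched by evaluating a system structure function over all component states, which yields the exact system reliability for small systems; the three-component series-parallel example is hypothetical:

```python
from itertools import product

def system_reliability(structure, p):
    """Exact reliability by enumerating component states (Boolean function).
    structure: maps a tuple of component states (True = working) to True
    when the system works; p: per-component success probabilities."""
    total = 0.0
    for states in product([True, False], repeat=len(p)):
        if structure(states):
            prob = 1.0
            for works, pi in zip(states, p):
                prob *= pi if works else (1.0 - pi)
            total += prob
    return total

# component 0 in series with the parallel pair (1, 2)
struct = lambda s: s[0] and (s[1] or s[2])
R = system_reliability(struct, [0.9, 0.8, 0.8])
```

Enumeration is exponential in the number of components, which is exactly why the paper's constructive graph-based algorithm matters for large systems; here it serves only to make the Boolean structure function concrete.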

  8. Analysis and Application of Mechanical System Reliability Model Based on Copula Function

    An Hai


    There are complicated correlations in mechanical systems. Exploiting the ability of copula functions to handle such correlations, this paper proposes a mechanical system reliability model based on the copula function. Serial and parallel mechanical system models are studied in detail and their respective reliability functions are obtained. Finally, an application study is carried out for the serial mechanical system reliability model to prove its validity by example. Using copula theory for mechanical system reliability modeling, and studying the distributions of the random variables (the marginal distributions of the mechanical products' lives) and the associated structure of the variables separately, can reduce the difficulty of multivariate probabilistic modeling and make the modeling and analysis process clearer.
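
A minimal sketch of the copula idea for a two-component series system: a Gaussian copula couples two exponential marginal lifetimes, and Monte Carlo sampling gives the system reliability; all parameters are illustrative, and the paper's copula family may differ.

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def series_reliability_copula(t, mean1, mean2, rho, trials=50000, seed=3):
    """Monte Carlo reliability at time t of a 2-component series system
    with exponential marginal lifetimes coupled by a Gaussian copula."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        z1 = g1
        z2 = rho * g1 + math.sqrt(1.0 - rho * rho) * g2  # correlated normal
        # map correlated normals to exponential lifetimes via inverse CDFs
        t1 = -mean1 * math.log(1.0 - phi(z1))
        t2 = -mean2 * math.log(1.0 - phi(z2))
        ok += (t1 > t) and (t2 > t)
    return ok / trials

r_indep = series_reliability_copula(1.0, 5.0, 5.0, rho=0.0)
r_corr = series_reliability_copula(1.0, 5.0, 5.0, rho=0.9)
```

This is the separation the abstract describes: the exponential marginals and the correlation structure (the copula) are chosen independently, and positively correlated failures raise the series-system reliability relative to the independence assumption.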

  9. Technology development of maintenance optimization and reliability analysis for safety features in nuclear power plants

    Kim, Tae Woon; Choi, Seong Soo; Lee, Dong Gue; Kim, Young Il


    The reliability data management system (RDMS) for safety systems of PHWR-type plants has been developed and utilized in the reliability analysis of the special safety systems of Wolsong Units 1 and 2 as the plant overhaul period has been lengthened. The RDMS was developed for periodic, efficient reliability analysis of the safety systems of Wolsong Units 1 and 2. In addition, this system provides the function of analyzing the effects on safety system unavailability if the test period of a test procedure changes, as well as the function of optimizing the test periods of safety-related test procedures. The RDMS can be utilized to handle requests from the regulatory institute actively with regard to the reliability validation of safety systems. (author)

  10. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

    Fabio De Felice


    Today, billions of dollars are being spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of this paper is to propose a methodological approach to improve the reliability of transportation systems, and in particular of railway transportation systems. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).
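
FMECA-style studies typically rank failure modes by a risk priority number (RPN); a minimal sketch with invented railway failure modes and ratings (the paper's actual scales and criticality treatment may differ):

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number: RPN = S x O x D,
    each factor rated on a 1-10 scale (illustrative convention)."""
    return severity * occurrence * detection

# hypothetical railway failure modes with invented ratings
failure_modes = {
    "signal misread by driver": rpn(9, 4, 5),
    "brake actuator sticks":    rpn(8, 3, 4),
    "radio link dropout":       rpn(6, 6, 3),
}
ranked = sorted(failure_modes, key=failure_modes.get, reverse=True)
```

The ranking directs mitigation effort; note that the human-error mode dominates here, echoing the abstract's 70-90 percent figure, which is why FMECA is paired with HRA.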

  11. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Dan-dan LI; Zhi-qiang WANG


    In this study, the limit state equation for tensile reliability analysis of the foundation surface of a gravity dam was established. The possible crack length was set as the action effect and the allowable crack length was set as the resistance in the limit state. Nonlinear FEM was used to obtain the crack length of the foundation surface of the gravity dam, and the linear response surface method based on the orthogonal test design method was used to calculate the reliability, providing a reasonable and simple method for calculating the reliability of the serviceability limit state. The Longtan RCC gravity dam was chosen as an example. An orthogonal test, including eleven factors and two levels, was conducted, and the tensile reliability was calculated. The analysis shows that this method is reasonable.

  12. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    Duffy, Stephen F.; Palko, Joseph L.


    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  13. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    XI Jia-mi; YANG Geng-she


    This paper discusses the advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability. On the basis of a deterministic analysis of the tunnel surrounding rock, a method for computing the reliability of surrounding rock stability was derived from the improved Monte-Carlo method. The computing method accounts for the randomness of the related parameters and therefore satisfies the correlations among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific way of discriminating and checking surrounding rock stability.
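
The Monte-Carlo reliability computation can be sketched as sampling a resistance R and a load effect S and counting limit-state violations (S > R); the distributions below are hypothetical, not the tunnel's parameters, and the sketch ignores the parameter correlations the paper handles:

```python
import random

def failure_probability(trials=100000, seed=11):
    """Monte Carlo sketch of a stability check: failure occurs when the
    random load effect S exceeds the random resistance R of the rock."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        R = rng.gauss(100.0, 10.0)   # hypothetical resistance, MPa
        S = rng.gauss(60.0, 15.0)    # hypothetical load effect, MPa
        fails += S > R
    return fails / trials

pf = failure_probability()
stability_reliability = 1.0 - pf
```

With these illustrative normal distributions the exact answer is Phi(-40/sqrt(325)) ≈ 0.013, so the sampled estimate can be checked against it.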

  14. Latency Analysis of Systems with Multiple Interfaces for Ultra-Reliable M2M Communication

    Nielsen, Jimmy Jessen; Popovski, Petar


    One of the ways to satisfy the requirements of ultra-reliable low latency communication for mission critical Machine-type Communications (MTC) applications is to integrate multiple communication interfaces. In order to estimate the performance in terms of latency and reliability of such an integrated communication system, we propose an analysis framework that combines traditional reliability models with technology-specific latency probability distributions. In our proposed model we demonstrate how failure correlation between technologies can be taken into account. We show for the considered…
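
The effect of failure correlation between interfaces can be sketched with a simple blend between the independent and fully correlated cases; the deadline-miss probabilities and the blending rule below are illustrative assumptions, not the paper's model:

```python
def combined_failure(p1, p2, rho=0.0):
    """Probability that both interfaces miss the latency deadline.
    p1, p2: individual deadline-miss probabilities; rho in [0, 1] is a
    simple knob blending independent and comonotonic (worst-case) failure."""
    independent = p1 * p2
    comonotonic = min(p1, p2)   # fully correlated worst case
    return (1.0 - rho) * independent + rho * comonotonic

# sketch: two interfaces, each missing a deadline with probability 1e-3
f_ind = combined_failure(1e-3, 1e-3, rho=0.0)
f_cor = combined_failure(1e-3, 1e-3, rho=0.3)
```

The point the abstract makes is visible immediately: even modest correlation (rho = 0.3 here) erodes the multiplicative reliability gain of interface diversity by orders of magnitude.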

  15. Reliability and error analysis on xenon/CT CBF

    Zhang, Z. [Diversified Diagnostic Products, Inc., Houston, TX (United States)


    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, lower percentage of xenon supply, and lower tissue enhancements. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four levels of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed error testing is also provided to inspect the combined effect of errors. The experiments show that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but reduces the signal/noise ratio. Lower xenon enhancement lowers the flow values in all areas of the brain. (author)

  16. Reliability of Foundation Pile Based on Settlement and a Parameter Sensitivity Analysis

    Shujun Zhang; Luo Zhong; Zhijun Xu


    Based on an uncertainty analysis of the settlement calculation model, the formula for the reliability index of a foundation pile is derived. Based on this formula, the influence on reliability of the coefficient of variation of the calculated settlement at the pile head, the coefficient of variation of the permissible settlement limit, the coefficient of variation of the measured settlement, the safety coefficient, and the mean value of the calculation model coefficient is analyzed. The results indicate that (1) hig...

  17. Investigation of Common Symptoms of Cancer and Reliability Analysis


Objective: To identify cancer distribution and treatment requirements, a questionnaire on cancer patients was conducted. It was our objective to validate a series of symptoms commonly used in traditional Chinese medicine (TCM). Methods: The M. D. Anderson Symptom Assessment Inventory (MDASI) was used with 10 more TCM items added. Questions regarding TCM application requested in cancer care were also asked. A multi-center, cross-sectional study was conducted in 340 patients from 4 hospitals in Beijing and Dalian. SPSS and Excel software were adopted for statistical analysis. The questionnaire was self-evaluated with the Cronbach's alpha score. Results: The most common symptoms were fatigue 89.4%, sleep disturbance 74.4%, dry mouth 72.9%, poor appetite 72.9%, and difficulty remembering 71.2%. These symptoms affected work (89.8%), mood (82.6%), and activity (76.8%), resulting in poor quality of life. Eighty percent of the patients wanted to regulate the body with TCM. Almost 100% of the patients were interested in acquiring knowledge regarding the integrated traditional Chinese medicine (TCM) and Western medicine (WM) in the treatment and rehabilitation of cancer. Cronbach's alpha score indicated that there was acceptable internal consistency within both the MDASI and TCM items, 0.86 for MDASI, 0.78 for TCM, and 0.90 for MDASI-TCM (23 items). Conclusions: Fatigue, sleep disturbance, dry mouth, poor appetite, and difficulty remembering are the most common symptoms in cancer patients. These greatly affect the quality of life for these patients. Patients expressed a strong desire for TCM holistic regulation. The MDASI and its TCM-adapted model could be a critical tool for the quantitative study of TCM symptoms.

  18. Correlation Relationship of Performance Shaping Factors (PSFs) for Human Reliability Analysis

    Bheka, M. Khumalo; Kim, Jonghyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)


    At TMI-2, operators permitted thousands of gallons of water to escape from the reactor plant before realizing that the coolant pumps were behaving abnormally. The coolant pumps were then turned off, which in turn led to the destruction of the reactor itself as cooling was completely lost within the core. Human also plays a role in many aspects of complex systems e.g. in design and manufacture of hardware, interface between human and system and also in maintaining such systems as well as for coping with unusual events that place the NPP system at a risk. This is why human reliability analysis (HRA) - an aspect of risk assessments which systematically identifies and analyzes the causes and consequences of human decisions and actions - is important in nuclear power plant operations. It either upgrades or degrades human performance; therefore it has an impact on the possibility of error. These PSFs can be used in various HRA methods to estimate Human Error Probabilities (HEPs). There are many current HRA methods who propose sets of PSFs for normal operation mode of NPP. Some of these PSFs in the sets have some degree of dependency and overlap. Overlapping PSFs introduce error in HEP evaluations due to the fact that some elements are counted more than once in data; this skews the relationship amongst PSF and masks the way that the elements interact to affect performance. This study uses a causal model that represents dependencies and relationships amongst PSFs for HEP evaluation during normal NPP operational states. The model is built taking into consideration the dependencies among PSFs and thus eliminating overlap. The use of an interdependent model of PSFs is expected to produce more accurate HEPs compared to other current methods. PSF sets produced in this study can be further used as nodes (variables) and directed arcs (causal influence between nodes) in HEP evaluation methods such as Bayesian belief (BN) networks. 
This study was done to estimate the relationships
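    As a toy illustration of how PSFs could feed a Bayesian belief network for HEP estimation, the sketch below marginalizes a binary Error node over two hypothetical PSF nodes; all node names and probabilities are illustrative placeholders, not values from the study:

```python
# Toy Bayesian belief network for HEP estimation: two PSF nodes
# (Stress, Training) feed a binary Error node. All probabilities
# are illustrative placeholders, not values from the study.
P_stress = {"high": 0.3, "low": 0.7}     # P(Stress)
P_training = {"good": 0.8, "poor": 0.2}  # P(Training)
# Conditional probability of an operator error given the PSF states
P_error = {
    ("high", "good"): 0.010,
    ("high", "poor"): 0.100,
    ("low", "good"): 0.001,
    ("low", "poor"): 0.020,
}

def marginal_hep():
    """Marginalize over the PSF states to obtain the overall HEP."""
    return sum(
        P_stress[s] * P_training[t] * P_error[(s, t)]
        for s in P_stress for t in P_training
    )

hep = marginal_hep()
```

    A real model would add directed arcs between dependent PSFs; the point of the causal structure is that inference then avoids double-counting overlapping factors.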

  19. Reliability and Validity of Quantitative Video Analysis of Baseball Pitching Motion.

    Oyama, Sakiko; Sosa, Araceli; Campbell, Rebekah; Correa, Alexandra


    Video recordings are used to quantitatively analyze pitchers' techniques. However, the reliability and validity of such analyses are unknown. The purpose of this study was to investigate the reliability and validity of joint and segment angles identified during the pitching motion using video analysis. Thirty high school baseball pitchers participated. The pitching motion was captured using 2 high-speed video cameras and a motion capture system. Two raters reviewed the videos and digitized the body segments to calculate 2-dimensional angles. The corresponding 3-dimensional angles were calculated from the motion capture data. Intrarater reliability, interrater reliability, and validity of the 2-dimensional angles were determined. The intrarater and interrater reliability of the 2-dimensional angles were high for most variables. The trunk contralateral flexion at maximum external rotation was the only variable with high validity. Trunk contralateral flexion at ball release, trunk forward flexion at foot contact and ball release, shoulder elevation angle at foot contact, and maximum shoulder external rotation had moderate validity. Two-dimensional angles at the shoulder, elbow, and trunk can be measured with high reliability. However, these angles are not necessarily anatomically correct, so the use of quantitative video analysis should be limited to angles that can be measured with good validity.
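    The 2-dimensional angle computation from digitized landmarks can be sketched as follows; the landmark coordinates and the horizontal-axis convention are assumptions for illustration, not the authors' protocol:

```python
import math

def segment_angle_2d(proximal, distal):
    """Planar orientation (degrees) of a body segment defined by two
    digitized landmarks, measured from the horizontal image axis."""
    dx = distal[0] - proximal[0]
    dy = distal[1] - proximal[1]
    return math.degrees(math.atan2(dy, dx))

def joint_angle_2d(a, b, c):
    """Included joint angle (degrees) at landmark b for landmarks a-b-c."""
    ang = segment_angle_2d(b, a) - segment_angle_2d(b, c)
    return abs((ang + 180.0) % 360.0 - 180.0)  # wrap into [0, 180]

# Example: hypothetical shoulder (a), elbow (b), wrist (c) pixel coordinates
angle = joint_angle_2d((0.0, 0.0), (10.0, 0.0), (10.0, 10.0))  # 90 degrees
```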

  20. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Chen, D.J.


    The literature is abundant with combinatorial reliability analyses of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Data-flow Graphs (DFGs) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes it possible to verify the correspondence of the data-flow graph representation to the actual system. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Data-flow Graphs (SDFGs), in both discrete-time and continuous-time variants, are developed and used to compute the time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.
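    A minimal sketch of time-dependent reliability computation, here for a two-unit parallel system modeled as a discrete-time Markov chain rather than a full SDFG; the per-step failure probability is an illustrative placeholder:

```python
# Discrete-time Markov sketch of time-dependent reliability for a
# two-unit parallel system (states: both up, one up, failed).
p = 0.01  # illustrative per-step failure probability of a single unit

# Transition matrix rows: from-state -> to-state probabilities
T = [
    [(1 - p) ** 2, 2 * p * (1 - p), p * p],  # both units up
    [0.0, 1 - p, p],                         # one unit up
    [0.0, 0.0, 1.0],                         # system failed (absorbing)
]

def reliability(steps):
    """Probability the system has not yet failed after `steps` steps."""
    state = [1.0, 0.0, 0.0]  # start with both units up
    for _ in range(steps):
        state = [sum(state[i] * T[i][j] for i in range(3)) for j in range(3)]
    return 1.0 - state[2]
```

    The continuous-time analogue replaces the transition matrix with a rate matrix and a matrix exponential; the structure of the computation is otherwise the same.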

  1. Multiobject Reliability Analysis of Turbine Blisk with Multidiscipline under Multiphysical Field Interaction

    Chun-Yi Zhang


    Full Text Available To accurately study the influence of the deformation, stress, and strain of a turbine blisk on the performance of an aeroengine, a comprehensive reliability analysis of the turbine blisk with multiple disciplines and multiple objects was performed based on the multiple response surface method (MRSM) and fluid-thermal-solid coupling. Firstly, the basic idea of the MRSM was introduced, and its mathematical model was established with quadratic polynomials. Finally, the reliability analyses of the deformation, stress, and strain of the turbine blisk were completed under multiphysical field coupling by the MRSM, and the comprehensive performance of the turbine blisk was evaluated. The analysis demonstrates that the reliability degrees of the deformation, stress, and strain of the turbine blisk are 0.9942, 0.9935, and 0.9954, respectively, when the allowable deformation, stress, and strain are 3.7 × 10⁻³ m, 1.07 × 10⁹ Pa, and 1.12 × 10⁻² m/m; the comprehensive reliability degree of the turbine blisk is 0.9919, which basically satisfies the engineering requirements of an aeroengine. The efforts of this paper provide a promising approach for multidiscipline, multiobject reliability analysis.
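    The MRSM idea of evaluating a reliability degree on a fitted response surface can be sketched as follows; the quadratic surrogate, its coefficients, and the single standardized input are illustrative assumptions, with only the allowable deformation taken from the abstract:

```python
import random

# Illustrative quadratic response surface for one output (e.g. blisk
# deformation) in a single standardized input x; the coefficients are
# placeholders, not fitted values from the study.
def deformation(x):
    return 3.0e-3 + 2.0e-4 * x + 5.0e-5 * x * x  # metres

ALLOWABLE = 3.7e-3  # allowable deformation from the abstract, in metres

def reliability_degree(n=100_000, seed=1):
    """Monte Carlo estimate of P(deformation <= allowable) for x ~ N(0,1)."""
    rng = random.Random(seed)
    ok = sum(deformation(rng.gauss(0.0, 1.0)) <= ALLOWABLE for _ in range(n))
    return ok / n

rel = reliability_degree()
```

    In the actual method, one such surface is fitted per response (deformation, stress, strain) and the comprehensive reliability combines them.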

  2. Segmental analysis of indocyanine green pharmacokinetics for the reliable diagnosis of functional vascular insufficiency

    Kang, Yujung; Lee, Jungsul; An, Yuri; Jeon, Jongwook; Choi, Chulhee


    Accurate and reliable diagnosis of functional insufficiency of the peripheral vasculature is essential, since Raynaud phenomenon (RP), the most common form of peripheral vascular insufficiency, is commonly associated with systemic vascular disorders. We have previously demonstrated that dynamic imaging of the near-infrared fluorophore indocyanine green (ICG) can be a noninvasive and sensitive tool for measuring tissue perfusion. In the present study, we demonstrated that combined analysis of multiple parameters, especially onset time and modified Tmax (the time from onset of ICG fluorescence to Tmax), can be used as a reliable diagnostic tool for RP. To validate the method, we performed conventional thermographic analysis combined with cold challenge and rewarming, along with ICG dynamic imaging and segmental analysis. A case-control analysis demonstrated that the segmental pattern of ICG dynamics in both hands differed significantly between normal controls and RP cases, suggesting the possibility of clinical application of this novel method for the convenient and reliable diagnosis of RP.
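    Extracting the onset time and modified Tmax from a sampled ICG intensity curve might look like the sketch below; the onset threshold and the synthetic curve are illustrative assumptions, not clinical values:

```python
# Extract onset time and "modified Tmax" (time from fluorescence onset
# to peak) from a sampled ICG intensity curve.
def icg_timing(times, intensities, onset_fraction=0.1):
    peak = max(intensities)
    t_max = times[intensities.index(peak)]
    # onset: first sample exceeding a fraction of the peak intensity
    onset = next(t for t, v in zip(times, intensities)
                 if v >= onset_fraction * peak)
    return onset, t_max - onset  # (onset time, modified Tmax)

times = [0, 1, 2, 3, 4, 5, 6]                 # seconds
curve = [0.0, 0.0, 0.2, 0.6, 1.0, 0.8, 0.5]   # normalized intensity
onset, mod_tmax = icg_timing(times, curve)
```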

  3. Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center

    Holanda, R.


    A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65 hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.

  4. Problems Related to Use of Some Terms in System Reliability Analysis

    Nadezda Hanusova


    Full Text Available The paper deals with problems of using dependability terms, defined in the current standard STN IEC 50 (191): International Electrotechnical Vocabulary, chap. 191: Dependability and quality of service (1993), in the dependability analysis of technical systems. The goal of the paper is to find a relation between the terms introduced in the mentioned standard and used in the dependability analysis of technical systems, and the rules and practices used in system analysis within systems theory. A description of the part of the system life cycle related to reliability is used as a starting point. This part of the system life cycle is described by a state diagram, and the reliability-relevant terms are assigned.

  5. Buckling and dynamic analysis of drill strings for core sampling

    Ziada, H.H., Westinghouse Hanford


    This supporting document presents buckling and dynamic stability analyses of the drill strings used for core sampling. The results of the drill string analyses provide limiting operating axial loads and rotational speeds to prevent drill string failure, instability and drill bit overheating during core sampling. The recommended loads and speeds provide controls necessary for Tank Waste Remediation System (TWRS) programmatic field operations.

  6. Development and analysis of U-core switched reluctance machine

    Jæger, Rasmus; Nielsen, Simon Staal; Rasmussen, Peter Omand


    these disadvantages have been presented, but not all of them have been demonstrated practically. This paper presents a practical demonstration and assessment of a segmented U-core SRM, which copes with some of the disadvantages of the regular SRM. The U-core SRM has a segmented stator, with a short flux path...

  7. Study of core support barrel vibration monitoring using ex-core neutron noise analysis and fuzzy logic algorithm

    Christian, Robby; Song, Seon Ho [Nuclear and Quantum Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kang, Hyun Gook [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)


    The application of neutron noise analysis (NNA) to ex-core neutron detector signals for monitoring the vibration characteristics of a reactor core support barrel (CSB) was investigated. Ex-core flux data were generated by using a nonanalog Monte Carlo neutron transport method in a simulated CSB model where the implicit capture and Russian roulette techniques were utilized. First and third order beam and shell modes of CSB vibration were modeled based on parallel processing simulation. A NNA module was developed to analyze the ex-core flux data based on its time variation, normalized power spectral density, normalized cross-power spectral density, coherence, and phase differences. The data were then analyzed with a fuzzy logic module to determine the vibration characteristics. The ex-core neutron signal fluctuation was directly proportional to the CSB's vibration, observed at 8 Hz and 15 Hz in the beam mode vibration and at 8 Hz in the shell mode vibration. The coherence between flux pairs was unity at the vibration peak frequencies. A distinct pattern of phase differences was observed for each of the vibration models. The developed fuzzy logic module demonstrated successful recognition of the vibration frequencies, modes, orders, directions, and phase differences within 0.4 ms for the beam and shell mode vibrations.
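    The spectral step of such an NNA module can be sketched with a plain DFT that locates the dominant vibration frequency in a detector signal; the sampling rate and the synthetic 8 Hz tone are illustrative, not plant data:

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Return the peak frequency (Hz) of a real signal sampled at fs,
    using a plain DFT as a stand-in for the NNA spectral analysis."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n

fs = 128  # Hz, illustrative sampling rate
signal = [math.sin(2 * math.pi * 8 * t / fs) for t in range(fs)]  # 8 Hz tone
peak_hz = dominant_frequency(signal, fs)
```

    The full module would additionally compute cross-spectra, coherence, and phase differences between detector pairs; the same DFT coefficients are the starting point for all of those quantities.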

  8. Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability.

    Lombard, Matthew; Snyder-Duch, Jennifer; Bracken, Cheryl Campanella


    Reviews the importance of intercoder agreement for content analysis in mass communication research. Describes several indices for calculating this type of reliability (varying in appropriateness, complexity, and apparent prevalence of use). Presents a content analysis of content analyses reported in communication journals to establish how…
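    Two common intercoder reliability indices, percent agreement and Cohen's kappa, can be computed as in this sketch; the coder labels are illustrative:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of units on which two coders assigned the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected intercoder agreement for two coders' labels."""
    n = len(a)
    po = percent_agreement(a, b)               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

coder1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
coder2 = ["pos", "neg", "neg", "neg", "pos", "pos"]
kappa = cohens_kappa(coder1, coder2)
```

    The indices differ exactly as the review describes: percent agreement ignores chance agreement, while kappa corrects for it, which is why the two can rank the same data very differently.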

  9. Homogeneous protein analysis by magnetic core-shell nanorod probes

    Schrittwieser, Stefan


    Studying protein interactions is of vital importance both to fundamental biology research and to medical applications. Here, we report on the experimental proof of a universally applicable label-free homogeneous platform for rapid protein analysis. It is based on optically detecting changes in the rotational dynamics of magnetically agitated core-shell nanorods upon their specific interaction with proteins. By adjusting the excitation frequency, we are able to optimize the measurement signal for each analyte protein size. In addition, due to the locking of the optical signal to the magnetic excitation frequency, background signals are suppressed, thus allowing exclusive studies of processes at the nanoprobe surface only. We study target proteins (soluble domain of the human epidermal growth factor receptor 2 - sHER2) specifically binding to antibodies (trastuzumab) immobilized on the surface of our nanoprobes and demonstrate direct deduction of their respective sizes. Additionally, we examine the dependence of our measurement signal on the concentration of the analyte protein, and deduce a minimally detectable sHER2 concentration of 440 pM. For our homogeneous measurement platform, good dispersion stability of the applied nanoprobes under physiological conditions is of vital importance. To that end, we support our measurement data by theoretical modeling of the total particle-particle interaction energies. The successful implementation of our platform offers scope for applications in biomarker-based diagnostics as well as for answering basic biology questions.

  10. Dynamic Scapular Movement Analysis: Is It Feasible and Reliable in Stroke Patients during Arm Elevation?

    De Baets, Liesbet; Van Deun, Sara; Desloovere, Kaat; Jaspers, Ellen


    Knowledge of three-dimensional scapular movements is essential to understand post-stroke shoulder pain. The goal of the present work is to determine the feasibility and the within and between session reliability of a movement protocol for three-dimensional scapular movement analysis in stroke patients with mild to moderate impairment, using an optoelectronic measurement system. Scapular kinematics of 10 stroke patients and 10 healthy controls was recorded on two occasions during active anteflexion and abduction from 0° to 60° and from 0° to 120°. All tasks were executed unilaterally and bilaterally. The protocol’s feasibility was first assessed, followed by within and between session reliability of scapular total range of motion (ROM), joint angles at start position and of angular waveforms. Additionally, measurement errors were calculated for all parameters. Results indicated that the protocol was generally feasible for this group of patients and assessors. Within session reliability was very good for all tasks. Between sessions, scapular angles at start position were measured reliably for most tasks, while scapular ROM was more reliable during the 120° tasks. In general, scapular angles showed higher reliability during anteflexion compared to abduction, especially for protraction. Scapular lateral rotations resulted in smallest measurement errors. This study indicates that scapular kinematics can be measured reliably and with precision within one measurement session. In case of multiple test sessions, further methodological optimization is required for this protocol to be suitable for clinical decision-making and evaluation of treatment efficacy. PMID:24244414

  12. Interrater Reliability and Concurrent Validity of a New Rating Scale to Assess the Performance of Everyday Life Tasks in Dementia: The Core Elements Method.

    de Werd, Maartje M E; Hoelzenbein, Angela C; Boelen, Daniëlle H E; Rikkert, Marcel G M Olde; Hüell, Michael; Kessels, Roy P C; Voigt-Radloff, Sebastian


    Errorless learning (EL) is an instructional procedure involving error reduction during learning. Errorless learning is mostly examined by counting correctly executed task steps or by rating them using a Task Performance Scale (TPS). Here, we explore the validity and reliability of a new assessment procedure, the core elements method (CEM), which rates essential building blocks of activities rather than individual steps. Task performance was assessed in 35 patients with Alzheimer's dementia recruited from the Relearning methods on Daily Living task performance of persons with Dementia (REDALI-DEM) study using the TPS and CEM independently. Results showed excellent interrater reliabilities for both measurement methods (CEM: intraclass correlation coefficient [ICC] = .85; TPS: ICC = .97). Both methods also showed high agreement (CEM: mean of measurement difference [MD] = -3.44, standard deviation [SD] = 14.72; TPS: MD = -0.41, SD = 7.89) and correlated highly (>.75). Based on these results, the TPS and CEM are both valid for assessing task performance. However, since the TPS is more complicated and time-consuming, the CEM may be the preferred method for future research projects.
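    The MD/SD agreement figures reported above can be computed from paired ratings as in this sketch; the scores are illustrative, not REDALI-DEM data:

```python
import statistics

def agreement_stats(rater_a, rater_b):
    """Mean difference (bias) and SD of the differences between two
    raters' scores, as in the MD/SD agreement figures reported above."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Illustrative paired task-performance scores from two raters
a_scores = [10, 12, 14, 9, 11]
b_scores = [11, 12, 13, 10, 12]
md, sd = agreement_stats(a_scores, b_scores)
```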




    On December 15-16, 2009, a 100-KE Reactor Core Removal Project Alternative Analysis Workshop was conducted at the Washington State University Consolidated Information Center, Room 214. Colburn Kennedy, Project Director, CH2M HILL Plateau Remediation Company (CHPRC), requested the workshop and Richard Harrington provided facilitation. The purpose of the session was to select the preferred Bio Shield alternative for integration with the Thermal Shield and Core Removal, and to develop the path forward to proceed with project delivery. Prior to this workshop, the S.A. Robotics (SAR) Obstruction Removal Alternatives Analysis (565-DLV-062) report was issued to all team members for use prior to and throughout the session. The multidisciplinary team consisted of representatives from 100-KE Project Management, Engineering, Radcon, Nuclear Safety, Fire Protection, Crane/Rigging, SAR Project Engineering, the Department of Energy Richland Field Office, the Environmental Protection Agency, the Washington State Department of Ecology, the Defense Nuclear Facility Safety Board, and Deactivation and Decommissioning subject matter experts from corporate CH2M HILL and Lucas. Appendix D contains the workshop agenda, the guidelines and expectations set going into and followed throughout the workshop, the opening remarks, and the attendance roster. The team was successful in selecting the preferred alternative and developing an eight-point path-forward action plan to proceed with conceptual design. Conventional Demolition was selected as the preferred alternative over two other alternatives: Diamond Wire with Options, and Harmonic Delamination with Conventional Demolition. The team's preferred alternative aligned with the conclusion of the SAR Obstruction Removal Alternatives Analysis report.
However, the team identified several path-forward actions, listed in Appendix A, which upon completion will solidify and potentially enhance the Conventional Demolition alternative with multiple options and approaches to achieve project delivery.

  14. Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis

    Zhou, Z.-G.; Tang, P.; Zhou, M.


    Normally, the status of land cover is inherently dynamic and changes continuously on the temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fires, floods, deforestation, and plant disease, occur worldwide at unknown times and locations, and their timely detection and characterization is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks For Additive Season and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
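    The forecast-and-flag idea of steps (2) and (3) can be sketched with an ordinary least-squares trend standing in for the full BFAST seasonal-trend model; the history series and the disturbed observation are illustrative:

```python
import statistics

def fit_trend(times, values):
    """Ordinary least-squares line through the historical observations."""
    n = len(times)
    tm, vm = sum(times) / n, sum(values) / n
    slope = (sum((t - tm) * (v - vm) for t, v in zip(times, values))
             / sum((t - tm) ** 2 for t in times))
    return slope, vm - slope * tm

def disturbance_reliability(history_t, history_v, t_new, v_new):
    """Confidence level that a new observation is a real disturbance,
    from how far it falls outside the model's residual spread."""
    slope, intercept = fit_trend(history_t, history_v)
    residuals = [v - (slope * t + intercept)
                 for t, v in zip(history_t, history_v)]
    sigma = statistics.stdev(residuals) or 1e-12
    z = abs(v_new - (slope * t_new + intercept)) / sigma
    return 2 * statistics.NormalDist().cdf(z) - 1  # two-sided confidence

# Illustrative history (roughly linear with noise) and a sudden drop,
# e.g. a flood-like disturbance at the next time step
history_t = list(range(10))
history_v = [0.1, 2.0, 4.1, 5.9, 8.0, 10.1, 11.9, 14.0, 16.1, 17.9]
conf = disturbance_reliability(history_t, history_v, 10, 5.0)
```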

  15. Reliability reallocation models as a support tools in traffic safety analysis.

    Bačkalić, Svetlana; Jovanović, Dragan; Bačkalić, Todor


    One of the essential questions placed before a road authority is where to act first, i.e., which road sections should be treated in order to achieve the desired level of reliability of a particular road; this question is the subject of this research. The paper shows how reliability reallocation theory can be applied in the safety analysis of a road consisting of sections. The model has been successfully tested using two apportionment techniques - ARINC and the minimum effort algorithm. The given methods were applied in the traffic safety analysis as a basic step, for the purpose of achieving a higher level of reliability. The methods previously used for selecting hazardous locations do not provide precise values for the required frequency of accidents, i.e., the time period between the occurrences of two accidents. In other words, they do not allow a precisely defined demand for increased reliability (expressed as a percentage) to be connected to the selection of particular road sections for further analysis. The paper shows that reallocation models can also be applied in road safety analysis, or more precisely, as part of the measures for increasing the level of road safety. A tool has been developed for selecting road sections for treatment on the basis of a precisely defined increase in the level of reliability of a particular road, i.e., the mean time between the occurrences of two accidents.
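    ARINC apportionment allocates a system reliability target to series elements in proportion to their observed failure rates; a sketch for road sections follows, with illustrative rates and target (not values from the study):

```python
import math

def arinc_allocation(failure_rates, system_target):
    """ARINC apportionment: allocate a system reliability target to
    series 'sections' in proportion to their observed failure rates.
    Sections with higher failure rates receive lower (easier) targets
    relative to their current state, i.e. more of the improvement."""
    total = sum(failure_rates)
    weights = [lam / total for lam in failure_rates]
    return [system_target ** w for w in weights]

# Illustrative accident ("failure") rates for three road sections
rates = [0.5, 0.3, 0.2]
targets = arinc_allocation(rates, 0.90)  # desired whole-road reliability
```

    Because the weights sum to one, the product of the section targets equals the road-level target, which is the defining property of the apportionment.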

  16. Reliability analysis of supporting pressure in tunnels based on three-dimensional failure mechanism

    罗卫华; 李闻韬


    Based on a nonlinear failure criterion, a three-dimensional failure mechanism of the possible collapse of a deep tunnel is presented with limit analysis theory. Support pressure is taken into consideration in the virtual work equation formulated under the upper bound theorem. It is necessary to point out that the properties of the surrounding rock mass play a vital role in the shape of the collapsing rock mass. The first order reliability method and the Monte Carlo simulation method are then employed to analyze the stability of the presented mechanism. Different rock parameters are considered random variables in order to evaluate the corresponding reliability index under an increasing applied support pressure. The reliability indexes calculated by the two methods are in good agreement. Sensitivity analysis was performed and the influence of the coefficient of variation of the rock parameters was discussed. It is shown that the tensile strength plays a much more important role in the reliability index than the dimensionless parameter, and that small changes in the coefficient of variation have a great influence on the reliability index. Thus, significant attention should be paid to the properties of the surrounding rock mass, and the applied support pressure needed to maintain tunnel stability can be determined for a given reliability index.
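    A Monte Carlo estimate of the failure probability and the corresponding reliability index can be sketched as below; the limit state function and its distributions are illustrative placeholders, not the paper's rock parameters:

```python
import random
from statistics import NormalDist

def reliability_index(limit_state, sample, n=100_000, seed=7):
    """Monte Carlo failure probability and reliability index beta,
    with g(x) <= 0 taken to mean failure."""
    rng = random.Random(seed)
    failures = sum(limit_state(sample(rng)) <= 0 for _ in range(n))
    pf = failures / n
    return pf, -NormalDist().inv_cdf(pf)  # beta = -Phi^{-1}(pf)

# Illustrative limit state: capacity (e.g. tensile strength) minus
# demand (induced stress); both distributions are placeholders.
def g(x):
    strength, stress = x
    return strength - stress

def sample(rng):
    return (rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.0))

pf, beta = reliability_index(g, sample)
```

    For this linear Gaussian limit state the exact index is 4/sqrt(2), about 2.83, so the Monte Carlo estimate can be checked directly; FORM would linearize g at the design point instead of sampling.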

  17. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    Zio, Enrico


    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...




    A two-point adaptive nonlinear approximation (referred to as TANA4) suitable for reliability analysis is proposed. Transformed and normalized random variables in probabilistic analysis could become negative and pose a challenge to the earlier developed two-point approximations; thus a suitable method that can address this issue is needed. In the method proposed, the nonlinearity indices of intervening variables are limited to integers. Then, on the basis of the present method, an improved sequential approximation of the limit state surface for reliability analysis is presented. With the gradient projection method, the data points for the limit state surface approximation are selected on the original limit state surface, which effectively represents the nature of the original response function. On the basis of this new approximation, the reliability is estimated using a first-order second-moment method. Various examples, including both structural and non-structural ones, are presented to show the effectiveness of the method proposed.

  19. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes


    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.

  20. Asymptotic Sampling for Reliability Analysis of Adhesive Bonded Stepped Lap Composite Joints

    Kimiaeifar, Amin; Lund, Erik; Thomsen, Ole Thybo


    Reliability analysis coupled with finite element analysis (FEA) of composite structures is computationally very demanding and requires a large number of simulations to achieve an accurate prediction of the probability of failure with a small standard error. In this paper Asymptotic Sampling, which....... Three dimensional (3D) FEA is used for the structural analysis together with a design equation that is associated with a deterministic code-based design equation where reliability is secured by partial safety factors. The Tsai-Wu and the maximum principal stress failure criteria are used to predict...... failure in the composite and adhesive layers, respectively, and the results are compared with the target reliability level implicitly used in the wind turbine standard IEC 61400-1. The accuracy and efficiency of Asymptotic Sampling is investigated by comparing the results with predictions obtained using...

  1. Structure buckling and non-probabilistic reliability analysis of supercavitating vehicles

    AN Wei-guang; ZHOU Ling; AN Hai


    To perform structural buckling and reliability analysis on supercavitating vehicles moving at high velocity underwater, the supercavitating vehicle was first simplified as a variable cross-section beam. Structural buckling analysis of the vehicle with and without engine thrust was then conducted, and the structural buckling safety margin equation of the supercavitating vehicle was established. The indefinite information was described by interval sets, and the structural reliability analysis was performed using a non-probabilistic reliability method. Treating the interval variables as random variables with uniform distributions, the Monte Carlo method was used to calculate the non-probabilistic failure degree. Numerical examples of supercavitating vehicles are presented. Under different ratios of base diameter to cavitator diameter, the trend of the non-probabilistic failure degree of structural buckling, with and without engine thrust, was studied as the speed varied.
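    Treating interval variables as uniform random variables, the non-probabilistic failure degree can be estimated by Monte Carlo as in this sketch; the buckling margin function and the interval bounds are illustrative placeholders, not the paper's values:

```python
import random

def nonprob_failure_degree(g, intervals, n=100_000, seed=3):
    """Sample each interval variable uniformly over its bounds and
    estimate the non-probabilistic failure degree P(g <= 0)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in intervals]
        fails += g(x) <= 0
    return fails / n

# Illustrative buckling safety margin: critical load minus applied load
def buckling_margin(x):
    critical, applied = x
    return critical - applied

# Interval bounds (placeholders) for the critical and applied loads
fd = nonprob_failure_degree(buckling_margin,
                            [(900.0, 1100.0), (800.0, 1000.0)])
```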

  2. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    Migneault, Gerard E.


    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  3. Reliability analysis of shoulder balance measures: comparison of the 4 available methods.

    Hong, Jae-Young; Suh, Seung-Woo; Yang, Jae-Hyuk; Park, Si-Young; Han, Ji-Hoon


    Observational study with 3 examiners. The objective was to compare the reliability of shoulder balance measurement methods. There are several measurement methods for shoulder balance, but no reliability analysis has been performed despite the clinical importance of this measurement. Whole-spine posteroanterior radiographs (n = 270) were collected to compare the reliability of the 4 available shoulder balance measures in patients with adolescent idiopathic scoliosis. Each radiograph was measured twice by each of the 3 examiners using the 4 measurement methods. The data were analyzed statistically to determine the inter- and intraobserver reliability. Overall, the 4 radiographical methods showed an excellent intraclass correlation coefficient in intraobserver comparisons (>0.904) regardless of severity, and the mean absolute difference values for all methods were low and comparatively similar regardless of severity. The mean absolute difference values for the clavicular angle method were lower than those of the other methods, suggesting it may be the most reliable shoulder balance measurement method clinically.

  4. Health-related quality of life in young adult patients with rheumatoid arthritis in Iran: reliability and validity of the Persian translation of the PedsQL™ 4.0 Generic Core Scales Young Adult Version.

    Pakpour, Amir H; Zeidi, Isa Mohammadi; Hashemi, Fariba; Saffari, Mohsen; Burri, Andrea


    The objective of the present study was to determine the reliability and validity of the Persian translation of the Pediatric Quality of Life Inventory (PedsQL™) 4.0 Generic Core Scales Young Adult Version in an Iranian sample of young adult patients with rheumatoid arthritis (RA). One hundred ninety-seven young adult patients with RA completed the 23-item PedsQL™ and the 36-item Short-Form Health Survey (SF-36). Disease activity based on the Disease Activity Score 28 was also measured. Internal consistency and test-retest reliability, as well as construct, discriminant, and convergent validity, were tested. Confirmatory factor analysis (CFA) was used to verify the original factor structure of the PedsQL™, and responsiveness to change in PedsQL™ scores over time was assessed. Cronbach's alpha coefficients ranged from α = 0.82 to α = 0.91. Test-retest reproducibility was satisfactory for all scales and the total scale score. The PedsQL™ showed good convergent validity with the SF-36. The PedsQL™ distinguished well between young adult patients and healthy young adults, and also between RA groups with different comorbidities. The CFA did not confirm the original four-factor model; instead, analyses revealed a best-fitting five-factor model for the PedsQL™ Young Adult Version. Repeated-measures analysis of variance indicated that the PedsQL™ scale scores for young adults increased significantly over time. The Persian translation of the PedsQL™ 4.0 Generic Core Scales Young Adult Version demonstrated good psychometric properties in young adult patients with RA and can be recommended for use in RA research in Iran.

  5. SUPERENERGY-2: a multiassembly, steady-state computer code for LMFBR core thermal-hydraulic analysis

    Basehore, K.L.; Todreas, N.E.


    Core thermal-hydraulic design and performance analyses for Liquid Metal Fast Breeder Reactors (LMFBRs) require repeated detailed multiassembly calculations to determine radial temperature profiles and subchannel outlet temperatures for various core configurations and subassembly structural analyses. At steady-state, detailed core-wide temperature profiles are required for core restraint calculations and subassembly structural analysis. In addition, sodium outlet temperatures are routinely needed for each reactor operating cycle. The SUPERENERGY-2 thermal-hydraulic code was designed specifically to meet these designer needs. It is applicable only to steady-state, forced-convection flow in LMFBR core geometries.

  6. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.


    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.
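
    As a minimal illustration of the second-moment reliability computation this record builds on (not the paper's PFEM formulation; the load and resistance statistics below are hypothetical), the reliability index for a linear limit state g = R - S with independent normal variables can be sketched as:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def second_moment_beta(mu_r, sigma_r, mu_s, sigma_s):
    # Hasofer-Lind reliability index for the linear limit state g = R - S,
    # with independent normal resistance R and load effect S
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

# Hypothetical statistics: resistance N(500, 50), load effect N(300, 40)
beta = second_moment_beta(mu_r=500.0, sigma_r=50.0, mu_s=300.0, sigma_s=40.0)
pf = norm_cdf(-beta)   # probability of failure
print(beta, pf)
```

    For nonlinear limit states such as fracture criteria, the index is instead found by the kind of constrained (Kuhn-Tucker) optimization the paper describes, searching for the most probable failure point.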

  7. Reliability analysis of production ships with emphasis on load combination and ultimate strength

    Wang, Xiaozhi


    This thesis deals with the ultimate strength and reliability analysis of offshore production ships, accounting for stochastic load combinations, using a typical North Sea production ship for reference. A review of methods for structural reliability analysis is presented. Probabilistic methods are established for the still-water and vertical wave bending moments. Linear stress analysis of a midships transverse frame is carried out, and four different finite element models are assessed. After verification of the general finite element code ABAQUS against a typical ship transverse girder example for which test results are available, ultimate strength analysis of the reference transverse frame is performed to obtain the ultimate load factors associated with the pressure loads specified in Det norske Veritas Classification rules for ships and rules for production vessels. Reliability analysis is performed to develop appropriate design criteria for the transverse structure. It is found that the transverse frame failure mode does not seem to contribute to system collapse. Ultimate strength analysis of the longitudinally stiffened panels is performed, accounting for combined biaxial and lateral loading. Reliability-based design of the longitudinally stiffened bottom and deck panels is accomplished for the collapse mode under combined biaxial and lateral loads. 107 refs., 76 figs., 37 tabs.

  8. Knowledge Economy Core Journals: Identification through LISTA Database Analysis.

    Nouri, Rasool; Karimi, Saeed; Ashrafi-rizi, Hassan; Nouri, Azadeh


    Knowledge economy has become increasingly broad over the years, and identification of core journals in this field can be useful for librarians in the journal selection process, for researchers in situating their studies, and for authors in finding an appropriate journal for publishing their articles. The present research attempts to determine the core journals of knowledge economy indexed in LISTA (Library, Information Science and Technology Abstracts). The research method was bibliometric, and the research population included the journals indexed in LISTA (from its start until the beginning of 2011) with at least one article about "knowledge economy". For data collection, keywords related to "knowledge economy" were extracted from the literature in this area and searched in LISTA using the title, keyword, and abstract fields, taking advantage of the LISTA thesaurus. Using this search strategy, 1608 articles from 390 journals were retrieved. The retrieved records were imported into an Excel sheet; the journals were then grouped, and the Bradford's coefficient was measured for each group. Finally, the average of the Bradford's coefficients was calculated, and core journals in the subject area of "knowledge economy" were determined using Bradford's formula. By Bradford's law of scattering, 15 journals with the highest publication rates were identified as "knowledge economy" core journals indexed in LISTA. In this list, "Library and Information Update" with 64 articles was at the top; "ASLIB Proceedings" and "Serials" with 51 and 40 articles were next in rank. In addition, 41 journals were identified as beyond the core, with "Library Hi Tech" at the top with 20 articles. The increased importance of the knowledge economy has led to growth in the production of articles in this subject area, so evaluating and ranking journals becomes a challenging task for librarians; a core journal list can provide a useful tool for journal selection and for quick and easy access to information.
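
    The Bradford zoning used in this record can be sketched in a few lines. The per-journal counts below are hypothetical apart from the top three (64, 51, and 40, taken from the record); the function splits a descending list of per-journal article counts into zones holding roughly equal numbers of articles:

```python
def bradford_zones(counts, zones=3):
    """Split journals (by article count, descending) into zones with
    roughly equal numbers of articles, per Bradford's law of scattering."""
    counts = sorted(counts, reverse=True)
    target = sum(counts) / zones
    result, zone, running = [], [], 0.0
    for c in counts:
        zone.append(c)
        running += c
        if running >= target and len(result) < zones - 1:
            result.append(zone)
            zone, running = [], 0.0
    if zone:
        result.append(zone)
    return result

# Hypothetical article counts per journal (only 64, 51, 40 are from the record)
counts = [64, 51, 40, 30, 25, 20, 15, 12, 10, 8, 7, 6, 5, 4, 4, 3, 3, 2, 2, 1]
zones = bradford_zones(counts)
sizes = [len(z) for z in zones]
print(sizes)
```

    Bradford's law predicts that the number of journals per zone grows roughly geometrically (here 2 : 4 : 14), so the first, smallest zone is taken as the core.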

  9. Knowledge Economy Core Journals: Identification through LISTA Database Analysis

    Nouri, Rasool; Karimi, Saeed; Ashrafi-rizi, Hassan; Nouri, Azadeh


    Background: Knowledge economy has become increasingly broad over the years, and identification of core journals in this field can be useful for librarians in the journal selection process and for researchers in finding an appropriate journal for publishing their articles. The present research attempts to determine the core journals of knowledge economy indexed in LISTA (Library, Information Science and Technology Abstracts). Methods: The research method was bibliometric and the research popu...

  10. Assessing the Reliability of Digitalized Cephalometric Analysis in Comparison with Manual Cephalometric Analysis

    Farooq, Mohammed Umar; Khan, Mohd. Asadullah; Imran, Shahid; Qureshi, Arshad; Ahmed, Syed Afroz; Kumar, Sujan; Rahman, Mohd. Aziz Ur


    Introduction: For more than seven decades, orthodontists have used cephalometric analysis as one of their main diagnostic tools; it can be performed manually or by software. The use of computers in treatment planning is expected to avoid errors, make the process less time consuming, and provide effective evaluation and high reproducibility. Aim: This study was done to evaluate and compare the accuracy and reliability of cephalometric measurements between the computerized method using direct digital radiographs and conventional tracing. Materials and Methods: Digital and conventional hand-tracing cephalometric analyses of 50 patients were done. Thirty anatomical landmarks were defined on each radiograph by a single investigator; 5 skeletal analyses (Steiner, Wits, Tweed, McNamara, Rakosi Jarabak) and 28 variables were calculated. Results: The variables showed consistency between the two methods except for the 1-NA, Y-axis, and interincisal angle measurements, which were higher in manual tracing, and the facial axis angle, which was higher in digital tracing. Conclusion: Most of the commonly used measurements were accurate, with some discrepancies between the digital tracing with FACAD® and manual methods. The advantages of digital imaging, such as enhancement, transmission, archiving, and low radiation dosages, make it preferable to the conventional method in daily use. PMID:27891451

  11. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Summary Manual

    C. L. Smith


    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing with and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system's response to initiating events and to quantify the associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE can identify important contributors to core damage (Level 1 PRA) and to containment failure during a severe accident, which leads to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for transforming an internal events model into a model for external events, such as flooding and fire analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). SAPHIRE also includes a separate module called the Graphical Evaluation Module (GEM). GEM is a special user interface linked to SAPHIRE that automates the SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (for example, calculating a conditional core damage probability) very efficiently and expeditiously. This report provides an overview of the functions available in SAPHIRE.

  12. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.


    The aim of this paper is to present reliability analysis and prediction under mixed mode loading using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle rotating bending and torsional stress. The Markov Chain was used to model two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model generates data close to the field data with a minimal percentage of error and, for practical application, provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.

  13. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    Nikabdullah, N. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia and Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Singh, S. S. K.; Alebrahim, R.; Azizi, M. A. [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); K, Elwaleed A. [Institute of Space Science (ANGKASA), Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (Malaysia); Noorani, M. S. M. [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia (Malaysia)


    The aim of this paper is to present reliability analysis and prediction under mixed mode loading using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure in order to increase the design life and eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle rotating bending and torsional stress. The Markov Chain was used to model two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was made to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model generates data close to the field data with a minimal percentage of error and, for practical application, provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
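
    A toy version of the two-state idea in these records (not the authors' fitted model; all transition probabilities below are hypothetical) treats bending and torsion as transient Markov states with a small per-cycle chance of absorption into a failure state, and solves for the mean number of cycles to failure:

```python
def mean_cycles_to_failure(p_bb, p_bt, p_tb, p_tt):
    """Mean cycles to absorption for a Markov chain with two transient
    states (bending B, torsion T) and an absorbing failure state.
    Any per-cycle probability mass not listed goes to failure."""
    # Solve t = 1 + Q t, i.e. (I - Q) t = 1, with Q = [[p_bb, p_bt], [p_tb, p_tt]],
    # by Cramer's rule on the 2x2 system
    a, b = 1.0 - p_bb, -p_bt
    c, d = -p_tb, 1.0 - p_tt
    det = a * d - b * c
    t_b = (d - b) / det   # mean cycles to failure starting in bending
    t_t = (a - c) / det   # mean cycles to failure starting in torsion
    return t_b, t_t

# Hypothetical per-cycle transition probabilities: bending is assumed the
# more damaging mode, so it carries the larger failure probability (0.02 vs 0.01)
t_b, t_t = mean_cycles_to_failure(p_bb=0.90, p_bt=0.08, p_tb=0.05, p_tt=0.94)
print(t_b, t_t)
```

    Because the bending state is given the larger per-cycle failure probability, the mean life starting in bending (70 cycles here) is shorter than starting in torsion (75 cycles).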

  14. Application of FTA Method to Reliability Analysis of Vacuum Resin Shot Dosing Equipment


    Faults of vacuum resin shot dosing equipment are studied systematically, and the fault tree of the system is constructed using the fault tree analysis (FTA) method. Qualitative and quantitative analyses of the tree are then carried out, and according to the results of the analysis, measures to improve the system are worked out and implemented. As a result, the reliability of the equipment is greatly enhanced.
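
    The quantitative step of FTA can be illustrated generically (the cut sets and probabilities below are hypothetical, not taken from this equipment study): given the minimal cut sets of a fault tree and independent basic-event probabilities, the top-event probability follows by inclusion-exclusion:

```python
from itertools import combinations

def top_event_probability(cut_sets, p):
    """Exact top-event probability from minimal cut sets by
    inclusion-exclusion, assuming independent basic events."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            events = set().union(*combo)       # union of the chosen cut sets
            term = 1.0
            for e in events:
                term *= p[e]
            total += (-1) ** (k + 1) * term    # alternating-sign correction
    return total

# Hypothetical fault tree: top event = (A and B) or C
p = {"A": 0.1, "B": 0.2, "C": 0.05}
q = top_event_probability([{"A", "B"}, {"C"}], p)
print(q)
```

    For this tree the result is P(AB) + P(C) - P(ABC) = 0.069; for large trees, the rare-event approximation (summing cut-set probabilities) is usually used instead of full inclusion-exclusion.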

  15. Aviation Fuel System Reliability and Fail-Safety Analysis. Promising Alternative Ways for Improving the Fuel System Reliability

    I. S. Shumilov


    The paper deals with design requirements for an aviation fuel system (AFS): AFS basic design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and pipeline systems in general, based on the selection of design solutions. It is highly advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases even avoid heavy disasters. AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. Currently, however, this principle is not sufficiently observed, and the AFS loses in reliability and fail-safety compared with the AHS. For the examined failures (single failures and their combinations), the guidelines to ensure AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through redundancy of fuel system components, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to 2 failures without serious consequences for the flight; its reliability and fail-safety will be similar to those of the AHS, although the above improvement measures will slightly increase the total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one, to run in case the first fails for some reason. The second pump, like the first pump, can be driven from the

  16. Analysis of the Kinematic Accuracy Reliability of a 3-DOF Parallel Robot Manipulator

    Guohua Cui


    Kinematic accuracy reliability is an important performance index in the evaluation of mechanism quality. Using a 3-DOF 3-PUU parallel robot manipulator as the research object, the position and orientation error model was derived by mapping the relation between the input and output of the mechanism. Three error sensitivity indexes that evaluate the kinematic accuracy of the parallel robot manipulator were obtained by applying the singular value decomposition of the error translation matrix. Considering the influence of controllable and uncontrollable factors on the kinematic accuracy, a mathematical model of reliability based on random probability was employed, and a measurement and calculation method for evaluating the mechanism's kinematic reliability level was provided. By analysing the mechanism's errors and reliability, the law of error sensitivity with respect to the location and structure parameters was obtained. The kinematic reliability of the parallel robot manipulator was statistically computed using the Monte Carlo simulation method. The reliability analysis of kinematic accuracy provides a theoretical basis for design optimization and error compensation.
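
    The two computations this record combines, SVD-based error sensitivity and Monte Carlo kinematic reliability, can be sketched together. Everything below is hypothetical (a made-up 3×3 error-mapping matrix, an assumed 0.05 mm input error spread, and a 0.15 mm pose tolerance), not the 3-PUU model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical error-mapping matrix J: source errors -> platform pose error
J = np.array([[1.2, 0.3, 0.1],
              [0.2, 0.9, 0.4],
              [0.1, 0.2, 1.1]])

# Singular values of J serve as error sensitivity indexes:
# the largest gives the worst-case amplification of source errors
sigmas = np.linalg.svd(J, compute_uv=False)

# Monte Carlo kinematic reliability: probability that the pose error norm
# stays within tolerance given random source errors
n = 100_000
source_err = rng.normal(0.0, 0.05, size=(n, 3))   # assumed error spread (mm)
pose_err = source_err @ J.T
reliability = float(np.mean(np.linalg.norm(pose_err, axis=1) <= 0.15))
print(sigmas.max(), reliability)
```

    The singular-value spread also indicates which error directions matter most, which is the information used for design optimization and error compensation.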

  17. Markov Chain Modelling of Reliability Analysis and Prediction under Mixed Mode Loading

    SINGH Salvinder; ABDULLAH Shahrum; NIK MOHAMED Nik Abdullah; MOHD NOORANI Mohd Salmi


    The reliability assessment of an automobile crankshaft provides an important understanding in dealing with the design life of the component, in order to eliminate or reduce the likelihood of failure and safety risks. Failures of crankshafts are considered catastrophic, leading to severe failure of the engine block and its other connecting subcomponents. The reliability of an automotive crankshaft under mixed mode loading is studied using the Markov Chain Model. The Markov Chain is modelled using a two-state condition to represent the bending and torsion loads that occur on the crankshaft. The automotive crankshaft represents a good case study of a component under mixed mode loading due to the rotating bending and torsion stresses. An estimation of the Weibull shape parameter is used to obtain the probability density function, cumulative distribution function, hazard and reliability rate functions, the bathtub curve, and the mean time to failure. It is shown how the various properties of the shape parameter can be used to model the failure characteristics through the bathtub curve. Likewise, an understanding of the patterns posed by the hazard rate can be used to improve the design and increase the life cycle, based on the reliability and dependability of the component. The proposed reliability assessment provides an accurate, efficient, fast and cost-effective reliability analysis in contrast to costly and lengthy experimental techniques.
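
    The link between the Weibull shape parameter and the bathtub curve mentioned in this record is easy to make concrete (generic textbook formulas, not the crankshaft's fitted parameters):

```python
import math

def weibull_hazard(t, shape, scale=1.0):
    # Hazard rate h(t) = (k/lam) * (t/lam)**(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def weibull_reliability(t, shape, scale=1.0):
    # Reliability R(t) = exp(-(t/lam)**k)
    return math.exp(-((t / scale) ** shape))

def weibull_mttf(shape, scale=1.0):
    # Mean time to failure: lam * Gamma(1 + 1/k)
    return scale * math.gamma(1.0 + 1.0 / shape)

# The shape parameter k maps onto the three bathtub-curve regions:
#   k < 1 -> infant mortality (decreasing hazard)
#   k = 1 -> useful life (constant hazard)
#   k > 1 -> wear-out (increasing hazard)
infant = weibull_hazard(2.0, shape=0.5) < weibull_hazard(1.0, shape=0.5)
wearout = weibull_hazard(2.0, shape=3.0) > weibull_hazard(1.0, shape=3.0)
print(infant, wearout, weibull_mttf(shape=1.0))
```

    Estimating the shape parameter from failure data thus tells directly which bathtub region the component occupies, which is how the hazard-rate pattern informs design improvement.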

  18. Reliability Analysis of Distributed Grid-connected Photovoltaic System Monitoring Network

    Fu Zhixin


    A large number of distributed grid-connected photovoltaic (PV) systems have brought new challenges to the dispatching of the power network. Real-time monitoring of PV systems can efficiently improve the ability of the power network to accept and control distributed PV systems, and thus mitigate the impact on the power network of the uncertainty of their power output. To study the reliability of a distributed PV monitoring network, it is of great significance to find a method to build a highly reliable monitoring system and to analyze the weak links and key nodes of its monitoring performance. First, a reliability model of the PV system was constructed based on WSN (wireless sensor network) technology. Then, in view of the dynamic characteristics of the network's reliability, fault tree analysis was used to identify the possible causes of network failure and the logical relationships between them. Finally, the reliability of the monitoring network was analyzed to identify the weak links and key nodes. This paper provides guidance for building a stable and reliable monitoring network for a distributed PV system.

  19. Reduced Expanding Load Method for Simulation-Based Structural System Reliability Analysis

    远方; 宋丽娜; 方江生


    The current situation and difficulties of structural system reliability analysis are reviewed. On the basis of the Monte Carlo method and computer simulation, a new analysis method, the reduced expanding load method (RELM), is presented, which can be used to solve structural reliability problems effectively and conveniently. In this method, the uncertainties of loads, structural material properties, and dimensions can be fully considered. If the statistical parameters of the stochastic variables are known, the probability of failure can be estimated rather accurately with this method. In contrast with traditional approaches, the RELM method gives a much better understanding of structural failure frequency, and its reliability index β is more meaningful. To illustrate this new idea, a specific example is given.
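
    RELM itself is a simulation refinement, but the plain Monte Carlo baseline it builds on is easy to sketch (hypothetical normal load and resistance statistics, not the paper's example):

```python
import random
from statistics import NormalDist

random.seed(42)

# Limit state g = R - S: failure when the load effect S exceeds resistance R
n = 200_000
failures = 0
for _ in range(n):
    r = random.gauss(500.0, 50.0)   # resistance (assumed statistics)
    s = random.gauss(300.0, 40.0)   # load effect (assumed statistics)
    if r - s <= 0.0:
        failures += 1

pf = failures / n                    # estimated probability of failure
beta = -NormalDist().inv_cdf(pf)     # equivalent reliability index
print(pf, beta)
```

    The crude estimator needs many samples when pf is small (here pf is on the order of 1e-3); methods such as RELM aim to estimate the same failure probability more efficiently than this baseline.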

  20. Vibration reliability analysis for aeroengine compressor blade based on support vector machine response surface method

    GAO Hai-feng; BAI Guang-chen


    To improve reliability analysis efficiency for aeroengine components such as compressor blades, the support vector machine response surface method (SRSM) is proposed. SRSM integrates the advantages of the support vector machine (SVM) and the traditional response surface method (RSM), and uses experimental samples to construct a suitable response surface function (RSF) to replace the complicated and abstract finite element model. Moreover, the randomness of material parameters, structural dimensions, and operating conditions is considered when extracting data, so that the response surface function agrees better with the practical model. The results indicate that, based on the same experimental data, SRSM comes closer than RSM to the reliability estimated by the Monte Carlo method (MCM), while SRSM (17.296 s) needs far less running time than MCM (10958 s) and RSM (9840 s). Therefore, under the same simulation conditions, SRSM has the highest analysis efficiency and can be considered a feasible and valid method for structural reliability analysis.
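
    The surrogate-plus-Monte-Carlo workflow described here can be sketched with a stand-in for the SVM regressor (kernel ridge regression, a close relative) and a cheap analytic limit-state function in place of the finite element model; all functions and constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances without forming a 3-D array
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def g(x):
    # Stand-in for the expensive finite element limit state: g <= 0 is failure
    return 2.5 - x[:, 0] - 0.2 * x[:, 1] ** 2

# Step 1: small design of experiments to train the response surface
X_train = rng.normal(0.0, 1.0, size=(200, 2))
y_train = g(X_train)

# Kernel ridge regression as a stand-in for the SVM response surface
lam = 1e-3
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

def surrogate(X):
    return rbf_kernel(X, X_train) @ alpha

# Step 2: cheap Monte Carlo on the surrogate instead of the true model
X_mc = rng.normal(0.0, 1.0, size=(50_000, 2))
pf_surrogate = float(np.mean(surrogate(X_mc) <= 0.0))
pf_direct = float(np.mean(g(X_mc) <= 0.0))
print(pf_surrogate, pf_direct)
```

    The saving comes from evaluating the expensive model only for the small training design; the tens of thousands of Monte Carlo samples are scored on the cheap surrogate instead.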

  1. Method and Application for Reliability Analysis of Measurement Data in Nuclear Power Plant

    Yun, Hun; Hwang, Kyeongmo; Lee, Hyoseoung [KEPCO E and C, Seoungnam (Korea, Republic of); Moon, Seungjae [Hanyang University, Seoul (Korea, Republic of)


    Pipe wall-thinning by flow-accelerated corrosion and various types of erosion is significant damage in the secondary system piping of nuclear power plants (NPPs). All NPPs in Korea have management programs to ensure pipe integrity against these degradation mechanisms. Ultrasonic testing (UT) is widely used for pipe wall thickness measurement, and numerous UT measurements have been performed during scheduled outages. Wall-thinning rates are determined conservatively according to several evaluation methods developed by the Electric Power Research Institute (EPRI). The issue of reliability caused by measurement error should be considered in the evaluation process. A reliability analysis method was developed for single and multiple measurement data in previous research. This paper describes the results of applying the reliability analysis method to real measurement data from a scheduled outage and demonstrates its benefits.

  2. TREAT Transient Analysis Benchmarking for the HEU Core

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)


    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average and peak temperatures as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos, and experiment log sheets, and in some cases it was not clear whether the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.

  3. An Efficient Approach for the Reliability Analysis of Phased-Mission Systems with Dependent Failures

    Xing, Liudong; Meshkat, Leila; Donahue, Susan K.


    We consider the reliability analysis of phased-mission systems with common-cause failures in this paper. Phased-mission systems (PMS) are systems supporting missions characterized by multiple, consecutive, and nonoverlapping phases of operation. System components may be subject to different stresses as well as different reliability requirements throughout the course of the mission. As a result, component behavior and relationships may need to be modeled differently from phase to phase when performing a system-level reliability analysis. This consideration poses unique challenges to existing analysis methods, and the challenges increase when common-cause failures (CCF) are incorporated in the model. CCF are multiple dependent component failures within a system that are a direct result of a shared root cause, such as sabotage, flood, earthquake, power outage, or human error. Many reliability studies have shown that CCF tend to increase a system's joint failure probabilities and thus contribute significantly to the overall unreliability of systems subject to CCF. We propose a separable phase-modular approach to the reliability analysis of phased-mission systems with dependent common-cause failures as one way to meet the above challenges in an efficient and elegant manner. Our methodology is twofold: first, we separate the effects of CCF from the PMS analysis using the total probability theorem and a common-cause event space developed from the elementary common causes; next, we apply an efficient phase-modular approach to analyze the reliability of the PMS. The phase-modular approach employs both combinatorial binary decision diagram and Markov-chain solution methods as appropriate. We provide an example of a reliability analysis of a PMS with both static and dynamic phases as well as CCF as an illustration of our proposed approach. The example is based on information extracted from a Mars orbiter project. The reliability model for this orbiter considers
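
    The total-probability decomposition used in the first step of this approach can be illustrated on a single-phase toy system (a hypothetical 2-out-of-3 success structure with a single all-component common-cause event, not the Mars orbiter model):

```python
from math import comb

def at_least_k_fail(n, k, q):
    # P(at least k of n identical, independent components fail)
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k, n + 1))

def system_unreliability(q_ind, q_ccf, n=3, k=2):
    # Total probability theorem: condition on whether the common-cause
    # event (which fails all n components) occurs
    return q_ccf * 1.0 + (1.0 - q_ccf) * at_least_k_fail(n, k, q_ind)

u_independent = at_least_k_fail(3, 2, 0.01)            # ignoring CCF
u_total = system_unreliability(q_ind=0.01, q_ccf=1e-4)  # with CCF separated out
print(u_independent, u_total)
```

    Even a small common-cause probability (1e-4 here) raises the system unreliability from about 2.98e-4 to about 3.98e-4, illustrating why CCF often dominate the unreliability of redundant systems.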

  4. Reliability Index for Reinforced Concrete Frames using Nonlinear Pushover and Dynamic Analysis

    Ahmad A. Fallah


    In conventional design and analysis methods, the governing parameters (loads, material strength, etc.) are not treated as random variables. Safety factors in current codes and standards are usually obtained on the basis of judgment and experience, which may be improper or uneconomical. In the technical literature, a method based on nonlinear static analysis has been suggested to establish a reliability index for the strength of structural systems. In this paper, a method based on nonlinear dynamic analysis with rising acceleration (Incremental Dynamic Analysis) is introduced, the results of which are compared with those of the previous (static pushover analysis) method, and two concepts, namely redundancy strength and redundancy variation, are proposed as indexes of these effects. The redundancy variation factor and redundancy strength factor indices for reinforced concrete frames with varying numbers of bays and stories and different ductility potentials are computed, and ultimately the reliability index is determined using these two indices.

  5. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

    Authen, S.; Larsson, J. (Risk Pilot AB, Stockholm (Sweden)); Bjoerkman, K.; Holmberg, J.-E. (VTT, Helsingfors (Finland))


    Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling, and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods have clear limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I&C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches and indicates that no state of the art currently exists. It shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. Whether software reliability needs to be explicitly modelled in the PSA remains an open matter. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between safety functions and the structure of accident sequences. In general, the conventional fault tree approach seems sufficient for modelling reactor-protection-system kinds of functions. The following focus areas have been identified for further activities: 1. a common taxonomy of hardware and software failure modes of digital components for common use; 2. guidelines regarding the level of detail in system analysis and the screening of components, failure modes, and dependencies; 3. an approach for modelling CCF between components (including software). (Author)

  6. Analysis and Design of ITER 1 MV Core Snubber

    王海田; 李格


    The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown of the accelerating electrodes of the ITER NBI. To design the core snubber of ITER, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used for designing the DIIID 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which the FBO method neglects. A simulation code including the parallel equivalent resistance and inductance has been set up. Simulations and experiments show that the parallel inductance effect leads to dramatically larger arc shorting currents. The case shows that a core snubber designed with the FBO method gives a more compact design.
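
The equivalent parallel R-L representation lends itself to a quick numerical experiment. The sketch below integrates a breakdown transient with the snubber modeled as a parallel R-L across the arc; all circuit values are illustrative placeholders, not ITER design data:

```python
import numpy as np

# Breakdown transient sketch: stray capacitance C, charged to V0, discharges
# through the arc into the snubber's equivalent parallel R-L. Values are
# hypothetical. Explicit Euler is adequate at this step size.
C = 1e-9      # stray capacitance [F]
R = 500.0     # equivalent parallel resistance [ohm]
L = 2e-3      # equivalent parallel inductance [H]
V0 = 1e6      # initial gap voltage [V]

dt, steps = 1e-9, 20_000
v, i_L = V0, 0.0
peak_current = 0.0
for _ in range(steps):
    i_total = i_L + v / R           # arc current = inductor + resistor branches
    peak_current = max(peak_current, i_total)
    v += -i_total / C * dt          # capacitor discharges through the arc
    i_L += v / L * dt               # inductor branch current ramps with voltage
print(f"peak arc current = {peak_current:.0f} A")
```

With these placeholder values the circuit is overdamped and the resistor branch sets the initial peak; shrinking the equivalent inductance shifts current into the inductor branch, which is the effect the abstract attributes to larger arc shorting currents.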

  7. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    Fagundo, Arturo


    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, the Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems, Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical errors due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
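
A minimal instance of the Markov technique discussed above, for a hypothetical repairable 1-out-of-2 system (the rates are illustrative, not from the thesis):

```python
import numpy as np
from scipy.linalg import expm

# Markov reliability sketch: 1-out-of-2 system, failure rate lam per
# component, repair rate mu, with the both-failed state absorbing.
lam, mu = 1e-3, 1e-1   # per hour (illustrative)
# States: 0 = both up, 1 = one failed (repairable), 2 = both failed (absorbing)
Q = np.array([
    [-2 * lam,      2 * lam, 0.0],
    [      mu, -(mu + lam), lam],
    [     0.0,         0.0, 0.0],
])
t = 100.0  # hours
P = expm(Q * t)                    # transition probabilities over [0, t]
reliability = P[0, 0] + P[0, 1]    # started in state 0, not yet absorbed
print(f"R({t:.0f} h) = {reliability:.6f}")
```

Even this three-state example hints at the scaling problem: n components with individual up/down states would need up to 2^n states, which motivates the hierarchical decomposition.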

  8. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick


    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes the nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience, specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks also include teaching classes and instituting personnel qualification processes to enhance the professional abilities of its analysts.

  9. A continuous-time Bayesian network reliability modeling and analysis framework

    Boudali, H.; Dugan, J.B.


    We present a continuous-time Bayesian network (CTBN) framework for dynamic systems reliability modeling and analysis. Dynamic systems exhibit complex behaviors and interactions between their components; where not only the combination of failure events matters, but so does the sequence ordering of th…

  10. Reliability of ¹H NMR analysis for assessment of lipid oxidation at frying temperatures

    The reliability of a method using ¹H NMR analysis for assessment of oil oxidation at a frying temperature was examined. During heating and frying at 180 °C, changes of soybean oil signals in the ¹H NMR spectrum including olefinic (5.16-5.30 ppm), bisallylic (2.70-2.88 ppm), and allylic (1.94-2.1...

  12. Reliability of an Automated High-Resolution Manometry Analysis Program across Expert Users, Novice Users, and Speech-Language Pathologists

    Jones, Corinne A.; Hoffman, Matthew R.; Geng, Zhixian; Abdelhalim, Suzan M.; Jiang, Jack J.; McCulloch, Timothy M.


    Purpose: The purpose of this study was to investigate inter- and intrarater reliability among expert users, novice users, and speech-language pathologists with a semiautomated high-resolution manometry analysis program. We hypothesized that all users would have high intrarater reliability and high interrater reliability. Method: Three expert…

  13. Preliminary Uncertainty Analysis for SMART Digital Core Protection and Monitoring System

    Koo, Bon Seung; In, Wang Kee; Hwang, Dae Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)


    The Korea Atomic Energy Research Institute (KAERI) developed on-line digital core protection and monitoring systems, called SCOPS and SCOMS, as part of the SMART plant protection and monitoring system. SCOPS simplified the protection system by directly connecting the four RSPT signals to each core protection channel and eliminated the control element assembly calculator (CEAC) hardware. SCOMS adopted the DPCM3D method for synthesizing the core power distribution instead of the Fourier expansion method used in conventional PWRs. The DPCM3D method produces a synthetic 3-D power distribution by coupling a neutronics code with measured in-core detector signals. An overall uncertainty analysis methodology, which statistically combines the uncertainty components of the SMART core protection and monitoring systems, was developed. In this paper, preliminary overall uncertainty factors for SCOPS/SCOMS of the SMART initial core were evaluated by applying the newly developed uncertainty analysis method.

  14. Magnetic, Structural, and Particle Size Analysis of Single- and Multi-Core Magnetic Nanoparticles

    Ludwig, Frank; Kazakova, Olga; Barquin, Luis Fernandez


    We have measured and analyzed three different commercial magnetic nanoparticle systems, both multi-core and single-core in nature, with the particle (core) size ranging from 20 to 100 nm. Complementary analysis methods and the same characterization techniques were carried out in different labs, and the results are compared with each other. The presented results primarily focus on determining the particle size, both the hydrodynamic size and the individual magnetic core size, as well as magnetic and structural properties. The analysis methods used include transmission electron microscopy, static and dynamic magnetization measurements, and Mössbauer spectroscopy. We show that particle (hydrodynamic and core) size parameters can be determined from different analysis techniques and the individual analysis results agree reasonably well. However, in order to compare size parameters precisely determined…

  15. Code assessment and modelling for Design Basis Accident analysis of the European Sodium Fast Reactor design. Part II: Optimised core and representative transients analysis

    Lazaro, A., E-mail: [JRC-IET European Commission, Westerduinweg 3, PO BOX 2, 1755 ZG Petten (Netherlands); Schikorr, M. [KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Mikityuk, K. [PSI, Paul Scherrer Institut, 5232 Villigen (Switzerland); Ammirabile, L. [JRC-IET European Commission, Westerduinweg 3, PO BOX 2, 1755 ZG Petten (Netherlands); Bandini, G. [ENEA, Via Martiri di Monte Sole 4, 40129 Bologna (Italy); Darmet, G.; Schmitt, D. [EDF, 1 Avenue du Général de Gaulle, 92141 Clamart (France); Dufour, Ph.; Tosello, A. [CEA, St. Paul lez Durance, 13108 Cadarache (France); Gallego, E.; Jimenez, G. [UPM, José Gutiérrez Abascal, 2, 28006 Madrid (Spain); Bubelis, E.; Ponomarev, A.; Kruessmann, R.; Struwe, D. [KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Stempniewicz, M. [NRG, Utrechtseweg 310, P.O. Box-9034, 6800 ES Arnhem (Netherlands)


    Highlights: • Benchmarked models have been applied for the analysis of DBA transients of the ESFR design. • Two system codes are able to simulate the behavior of the system beyond sodium boiling. • The optimization of the core design and its influence in the transients’ evolution is described. • The analysis has identified peak values and grace times for the protection system design. - Abstract: The new reactor concepts proposed in the Generation IV International Forum require the development and validation of computational tools able to assess their safety performance. In the first part of this paper the models of the ESFR design developed by several organisations in the framework of the CP-ESFR project were presented and their reliability validated via a benchmarking exercise. This second part of the paper includes the application of those tools for the analysis of design basis accident (DBA) scenarios of the reference design. Further, this paper also introduces the main features of the core optimisation process carried out within the project with the objective to enhance the core safety performance through the reduction of the positive coolant density reactivity effect. The influence of this optimised core design on the reactor safety performance during the previously analysed transients is also discussed. The conclusion provides an overview of the work performed by the partners involved in the project towards the development and enhancement of computational tools specifically tailored to the evaluation of the safety performance of the Generation IV innovative nuclear reactor designs.

  16. McCARD for Neutronics Design and Analysis of Research Reactor Cores

    Shim, Hyung Jin; Park, Ho Jin; Kwon, Soonwoo; Seo, Geon Ho; Kim, Chang Hyo


    McCARD is a Monte Carlo (MC) neutron-photon transport simulation code developed exclusively for the neutronics design and analysis of nuclear reactor cores. McCARD is equipped with the hierarchical modeling and scripting functions, the CAD-based geometry processing module, the adjoint-weighted kinetics parameter and source multiplication factor estimation modules as well as the burnup analysis capability for the neutronics design and analysis of both research and power reactor cores. This paper highlights applicability of McCARD for the research reactor core neutronics analysis, as demonstrated for Kyoto University Critical Assembly, HANARO, and YALINA.

  17. An efficient hybrid reliability analysis method with random and interval variables

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping


    Random and interval variables often coexist. Interval variables make reliability analysis much more computationally intensive. This work develops a new hybrid reliability analysis method so that the probability analysis (PA) loop and interval analysis (IA) loop are decomposed into two separate loops. An efficient PA algorithm is employed, and a new efficient IA method is developed. The new IA method consists of two stages. The first stage is for monotonic limit-state functions. If the limit-state function is not monotonic, the second stage is triggered. In the second stage, the limit-state function is sequentially approximated with a second order form, and the gradient projection method is applied to solve the extreme responses of the limit-state function with respect to the interval variables. The efficiency and accuracy of the proposed method are demonstrated by three examples.
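
The double-loop decomposition described above can be sketched on a toy problem. The limit-state function, distributions and interval below are invented for illustration, and the inner interval analysis exploits monotonicity, as in the method's first stage:

```python
import numpy as np

# Mixed random/interval reliability sketch (illustrative, not from the paper):
# g(X, y) = X1 - y * X2, with X1 ~ N(6, 1), X2 ~ N(3, 0.5) random and
# y confined to the interval [0.8, 1.2].
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(6.0, 1.0, n)
x2 = rng.normal(3.0, 0.5, n)

y_lo, y_hi = 0.8, 1.2
# Inner interval-analysis loop: g is monotonic in y for fixed x (decreasing
# when x2 > 0), so the worst case lies at an interval endpoint (stage one).
g_worst = np.minimum(x1 - y_lo * x2, x1 - y_hi * x2)
# Outer probability-analysis loop: plain Monte Carlo over the random variables.
pf_upper = np.mean(g_worst < 0.0)   # upper bound on the failure probability
print(f"P_f upper bound = {pf_upper:.4f}")
```

The paper's contribution is making both loops cheap (an efficient PA algorithm and a two-stage IA); the brute-force version above only shows why the interval variables turn a single probability into a probability bound.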

  18. A new method for high-resolution methane measurements on polar ice cores using continuous flow analysis.

    Schüpbach, Simon; Federer, Urs; Kaufmann, Patrik R; Hutterli, Manuel A; Buiron, Daphné; Blunier, Thomas; Fischer, Hubertus; Stocker, Thomas F


    Methane (CH4) is the second most important anthropogenic greenhouse gas in the atmosphere. Rapid variations of the CH4 concentration, as frequently registered, for example, during the last ice age, have been used as reliable time markers for the definition of a common time scale of polar ice cores. In addition, these variations indicate changes in the sources of methane, primarily associated with the presence of wetlands. In order to determine the exact time evolution of such fast concentration changes, CH4 measurements of the highest resolution in the ice core archive are required. Here, we present a new, semicontinuous and field-deployable CH4 detection method, which was incorporated in a continuous flow analysis (CFA) system. In CFA, samples cut along the axis of an ice core are melted at a melt speed of typically 3.5 cm/min. The air from bubbles in the ice core is extracted continuously from the meltwater and forwarded to a gas chromatograph (GC) for high-resolution CH4 measurements. The GC performs a measurement every 3.5 min; hence, a depth resolution of 15 cm is achieved at the chosen melt rate. An even higher resolution is not necessary due to the low-pass filtering of air in ice cores caused by the slow bubble enclosure process and the diffusion of air in firn. Reproducibility of the new method is 3%; thus, for a typical CH4 concentration of 500 ppb during an ice age, this corresponds to an absolute precision of 15 ppb, comparable to traditional analyses on discrete samples. Results of CFA-CH4 measurements on the ice core from Talos Dome (Antarctica) illustrate the much higher temporal resolution of our method compared with established melt-refreeze CH4 measurements and demonstrate the feasibility of the new method.

  19. Reliability Analysis of Piezoelectric Truss Structures Under Joint Action of Electric and Mechanical Loading

    YANG Duo-he; AN Wei-guang; ZHU Rong-rong; MIAO Han


    Based on the finite element method (FEM) for the dynamical analysis of piezoelectric truss structures, expressions for the safety margins of strength fracture and damage electric field in the structural elements are given, considering the electromechanical coupling effect under the joint action of electric and mechanical loads. By introducing the stochastic FEM, the reliability of piezoelectric truss structures is analyzed by solving for the partial derivatives while the dynamical response of the structural system is obtained with the mode-superposition method. The influence of the electromechanical coupling effect on the reliability index is then analyzed through an example.

  20. Signal Quality Outage Analysis for Ultra-Reliable Communications in Cellular Networks

    Gerardino, Guillermo Andrés Pocovi; Alvarez, Beatriz Soret; Lauridsen, Mads


    …we investigate the potential of several techniques to combat these main threats. The analysis shows that traditional microscopic multiple-input multiple-output schemes with 2x2 or 4x4 antenna configurations are not enough to fulfil stringent reliability requirements. It is revealed how such antenna schemes must be complemented with macroscopic diversity as well as interference management techniques in order to ensure the necessary SINR outage performance. Based on the obtained performance results, it is discussed which of the feasible options fulfilling the ultra-reliable criteria are most promising…

  1. Fuzzy Fatigue Reliability Analysis of Offshore Platforms in Ice-Infested Waters

    方华灿; 段梦兰; 贾星兰; 谢彬


    The calculation of fatigue stress ranges due to random waves and ice loads on offshore structures is discussed, and the corresponding accumulative fatigue damages of the structural members are evaluated. To evaluate the fatigue damage to the structures more accurately, the Miner rule is modified considering the fuzziness of the concerned parameters, and a new model for fuzzy fatigue reliability analysis of offshore structural members is developed. Furthermore, an assessment method for predicting the dynamics of the fuzzy fatigue reliability of structural members is provided.

  2. Tensile reliability analysis for gravity dam foundation surface based on FEM and response surface method

    Tong-chun LI; Li, Dan-Dan; Wang, Zhi-Qiang


    In this paper, the limit state equation for the tensile reliability of the foundation base of a gravity dam is established. The possible crack length is set as the action effect and the allowable crack length is set as the resistance in this limit state. The nonlinear FEM is applied to obtain the crack length of the foundation base of the gravity dam, and a linear response surface method based on the orthogonal test design method is used to calculate the reliability, which offers a reasonable and simple analysis method t...


    Giovanni Francesco Spatola


    Full Text Available The use of image analysis methods has allowed us to obtain more reliable and reproducible immunohistochemistry (IHC) results. Wider use of such approaches and simplification of software allowing a colorimetric study have meant that these methods are available to everyone, and have made it possible to standardize the technique by a reliable scoring system. Moreover, the recent introduction of multispectral image acquisition systems has further refined these techniques, minimizing artefacts and easing the evaluation of the data by the observer.


    Dars, P.; Ternisien D'Ouville, T.; Mingam, H.; Merckel, G.


    Statistical analysis of asymmetry in LDD NMOSFET electrical characteristics shows the influence of implantation angles on the non-overlap variation observed on devices realized on a 100 mm wafer and within the wafers of a batch. The study of the consequences of this dispersion for aging behaviour illustrates the importance of this parameter for reliability and the necessity of taking it into account for accurate analysis of stress results.

  5. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.


    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  6. Analysis of the Gas Core Actinide Transmutation Reactor (GCATR)

    Clement, J. D.; Rust, J. H.


    Power plant design studies were carried out for two applications of the plasma core reactor: (1) as a breeder reactor, and (2) as a reactor able to transmute actinides effectively. In addition to the above applications, the reactor produced electrical power with a high efficiency. A reactor subsystem was designed for each of the two applications. For the breeder reactor, neutronics calculations were carried out for a U-233 plasma core with a molten salt breeding blanket. A reactor was designed with a low critical mass (less than a few hundred kilograms of U-233) and a breeding ratio of 1.01. The plasma core actinide transmutation reactor was designed to transmute the nuclear waste from conventional LWRs. The spent fuel is reprocessed, during which 100% of the Np, Am, Cm, and higher actinides are separated from the other components. These actinides are then manufactured as oxides into zirconium-clad fuel rods and charged as fuel assemblies in the reflector region of the plasma core actinide transmutation reactor. In the equilibrium cycle, about 7% of the actinides are directly fissioned away, while about 31% are removed by reprocessing.

  7. The design and use of reliability data base with analysis tool

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.


    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is not standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possibly dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for analyzing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.
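
One of the stationarity questions raised above is commonly answered with a trend test. The sketch below applies the classical Laplace trend test to an invented socket history; this particular test is a standard choice for the purpose, not necessarily one of the new tests the report proposes:

```python
import numpy as np

# Laplace trend test sketch: under a stationary (homogeneous Poisson) failure
# process observed on [0, T], failure times are uniform on [0, T], and the
# statistic u is approximately standard normal. |u| > 1.96 suggests a trend.
times = np.array([12.0, 30.0, 55.0, 61.0, 90.0, 110.0, 140.0, 155.0, 190.0])
T = 200.0                           # end of the observation window (illustrative)
n = times.size
u = (times.mean() - T / 2) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace statistic u = {u:.2f}")
```

A strongly negative u would indicate reliability growth (failures concentrated early), a strongly positive u deterioration; only in the stationary case is pooling the data into one statistical population defensible.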

  8. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.


    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
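
The two-parameter Weibull model combined with the principle of independent action can be illustrated with a toy element-wise calculation. The stresses, volumes and Weibull parameters below are invented, units are assumed consistent, and the real CARES/LIFE treatment (subcritical crack growth, surface flaws, etc.) is omitted:

```python
import numpy as np

# Weibull/PIA sketch of fast-fracture volume-flaw reliability (illustrative
# numbers, consistent units assumed). Under PIA, each tensile principal
# stress in each element contributes an independent "risk of rupture".
m, sigma_0 = 10.0, 300.0          # Weibull modulus, characteristic strength
# Principal stresses and volumes for two hypothetical finite elements
sigma = np.array([[250.0, 100.0, 40.0],
                  [180.0,  90.0, 20.0]])   # MPa
vol = np.array([2.0, 5.0])                  # element volumes

# Compressive stresses do not contribute, hence the clip at zero.
risk = (vol[:, None] * (np.clip(sigma, 0.0, None) / sigma_0) ** m).sum()
reliability = np.exp(-risk)
print(f"component reliability = {reliability:.4f}")
```

The steep exponent m means the highest-stressed element dominates the risk sum, which is why probabilistic ceramic design focuses on shaving stress peaks rather than average stress.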

  9. A model for reliability analysis and calculation applied in an example from chemical industry

    Pejović Branko B.


    Full Text Available The subject of the paper is reliability design in polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enables determination of the basic factors that result in poor development of a process. This would reduce the anticipated losses through the ability to control them, as well as enable improvement of the quality of production, which is the major goal of the paper. The reliability analysis and calculation use a deductive method based on the design of a fault tree scheme for the system, combined with inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which serves to obtain complete information on the probability of the top event in the process, as well as to support objective decision making and alternative solutions.

  10. A Study on Management Techniques of Power Telecommunication System by Reliability Analysis

    Lee, B.K.; Lee, B.S.; Woy, Y.H.; Oh, M.T.; Shin, M.T.; Kwan, O.G. [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center; Kim, K.H.; Kim, Y.H.; Lee, W.T.; Park, Y.H.; Lee, J.J.; Park, H.S.; Choi, M.C.; Kim, J. [Korea Electrotechnology Research Inst., Changwon (Korea, Republic of)


    The power telecommunication network is expanding rapidly with the growth of the electric power supply and the corresponding expansion of power facilities. The requirements of power facility and office automation and the importance of communication services make the network complex and confusing to operate, and effective operation and management are urgently needed to keep pace with its changes. Therefore, the object of this study is to establish a total reliability analysis system based on dependability, maintainability, cost effectiveness and replenishment, in order to maintain reasonable reliability, support economical maintenance and enable reasonable planning of facility investment. It will also produce effective management and administration systems and schemes for total reliability improvement. (author). 44 refs., figs.

  11. Reliability Analysis of Component Software in Wireless Sensor Networks Based on Transformation of Testing Data

    Chunyan Hou


    Full Text Available We develop an approach to component software reliability analysis which includes the benefits of both time-domain and structure-based approaches. This approach overcomes the deficiency of existing NHPP techniques, which fall short of addressing repair and internal system structures simultaneously. Our solution adopts a method of transforming testing data to cover both methods, and is expected to improve reliability prediction. This paradigm allows the component-based software testing process to deviate from the assumptions of NHPP models, and accounts for software structures by modeling the testing process. From the testing model it builds the mapping relation from the testing profile to the operational profile, which enables transformation of the testing data to build the reliability dataset required by NHPP models. Finally, an example is evaluated to validate this approach and show its effectiveness.
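
The NHPP fitting step that follows the data transformation can be sketched with the Goel-Okumoto mean-value function, used here as a representative NHPP model (the abstract does not specify which one); the "transformed testing data" are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# NHPP sketch: fit a Goel-Okumoto mean-value function m(t) = a(1 - e^{-bt})
# to cumulative failure counts. The dataset is synthetic, generated from
# a = 100, b = 0.12 with small noise, purely for illustration.
def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1.0, 21.0)                         # e.g. 20 weeks of testing
rng = np.random.default_rng(1)
m_obs = mean_value(t, 100.0, 0.12) + rng.normal(0.0, 1.0, t.size)

(a_hat, b_hat), _ = curve_fit(mean_value, t, m_obs, p0=(50.0, 0.05))

# Conditional reliability: probability of zero failures in the next week,
# given the fitted cumulative failure intensity.
rel = np.exp(-(mean_value(21.0, a_hat, b_hat) - mean_value(20.0, a_hat, b_hat)))
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}, R(1 wk | t = 20) = {rel:.3f}")
```

The point of the paper's transformation is precisely to make data like `m_obs` satisfy the NHPP assumptions before a fit of this kind is attempted.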

  12. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)


    To solve the problem of the large computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active-learning reliability method called AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active-learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.
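
The importance-sampling ingredient shared by AK-IS and AK-SSIS can be illustrated on a one-dimensional toy problem; the limit state and proposal density below are invented, and no Kriging surrogate is included:

```python
import numpy as np

# Importance-sampling sketch: estimate a small failure probability by
# sampling near the design point instead of from the original density.
# Limit state g(x) = 5 - x with x ~ N(0, 1); exact P_f = Phi(-5) ~ 2.87e-7,
# far too small for plain Monte Carlo at this sample size.
rng = np.random.default_rng(7)
n = 100_000
x = rng.normal(5.0, 1.0, n)        # proposal centred at the design point x* = 5
# Likelihood ratio N(0,1)/N(5,1); equal variances, so normalizations cancel.
w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - 5.0)**2)
pf = np.mean((5.0 - x < 0.0) * w)
print(f"P_f = {pf:.3e}")
```

Methods like AK-IS add a Kriging surrogate so that the indicator is evaluated on a cheap model, and SSIS replaces the single proposal with a sequence of intermediate failure events; the estimator above is only the common core.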

  13. Reactivity insertion transient analysis for KUR low-enriched uranium silicide fuel core

    Shen, Xiuzhong; Nakajima, Ken; Unesaki, Hironobu; Mishima, Kaichiro


    The purpose of this study is to realize the full core conversion from the use of High Enriched Uranium (HEU) fuels to the use of Low Enriched Uranium (LEU) fuels in the Kyoto University Research Reactor (KUR). Although the conversion of nuclear energy sources is required to keep the safety margins and reactor reliability of the KUR HEU core, the uranium density (3.2 gU/cm3) and enrichment (20%) of the LEU fuel (U3Si2–Al) are quite different from the uranium density (0.58 gU/cm3) and enrichment (93%...

  14. Statistical Degradation Models for Reliability Analysis in Non-Destructive Testing

    Chetvertakova, E. S.; Chimitova, E. V.


    In this paper, we consider the application of statistical degradation models for reliability analysis in non-destructive testing. Such models enable estimation of the reliability function (the dependence of non-failure probability on time) for a fixed critical level, using information from the degradation paths of tested items. The most widely used models are the gamma and Wiener degradation models, in which the gamma or normal distributions, respectively, are assumed for the degradation increments. Using the computer simulation technique, we have analysed the accuracy of the reliability estimates obtained for the considered models. The number of increments can be enlarged by increasing the sample size (the number of tested items) or by increasing the frequency of measuring degradation. It has been shown that the sample size has a greater influence on the accuracy of the reliability estimates than the measuring frequency. Moreover, another important factor influencing the accuracy of reliability estimation is the duration over which the degradation process is observed.
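
A Wiener-model sketch of the reliability function estimate described above; all parameters, the critical level, and the monitoring grid are illustrative:

```python
import numpy as np

# Wiener degradation model sketch: X(t) = mu*t + sigma*W(t); an item fails
# once its degradation path reaches the critical level. Reliability at time
# t is estimated as the fraction of simulated paths still below the level.
rng = np.random.default_rng(42)
mu, sigma, level = 1.0, 0.5, 10.0          # drift, diffusion, critical level
n_items, n_steps, dt = 20_000, 80, 0.1     # paths monitored up to t = 8

# Independent normal increments are the defining property of the Wiener model.
increments = rng.normal(mu * dt, sigma * np.sqrt(dt), (n_items, n_steps))
paths = increments.cumsum(axis=1)
reliability = 1.0 - np.mean(paths.max(axis=1) >= level)
print(f"estimated R(t = 8) = {reliability:.3f}")
```

Rerunning this with more items versus a finer `dt` mirrors the paper's comparison: adding items shrinks the Monte Carlo error directly, while a finer monitoring grid mainly refines the first-passage detection.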

  15. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)]


    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient for assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.
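    The simple component-level modeling the abstract contrasts with (independent exponential lifetimes, series dependency) can be sketched as a toy Monte Carlo; the subsystem names and MTTF values below are invented for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical subsystem MTTFs in hours, assuming exponential lifetimes.
mttf = {"disk_enclosure": 50_000.0, "controller": 100_000.0, "network": 200_000.0}

n_trials = 100_000
# A series system fails at the first subsystem failure, so the system
# lifetime is the minimum of the drawn subsystem lifetimes.
lifetimes = np.column_stack([rng.exponential(m, n_trials) for m in mttf.values()])
system_life = lifetimes.min(axis=1)
estimate = system_life.mean()

# Closed-form check: for independent exponentials in series, the failure
# rates add, so MTTF_sys = 1 / sum(1 / MTTF_i).
analytic = 1.0 / sum(1.0 / m for m in mttf.values())
```

    The closed-form answer exists only because of the exponential assumption; the paper's point is that realistic failure patterns and dependencies require simulation instead.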

  16. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    Yu, Bo; Ning, Chao-lie; Li, Bing


    A probabilistic framework for the durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties in the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in the governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. The Nataf transformation was then adopted to transform the non-normal random variables from the original physical space into the independent standard normal space. After that, the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparison with the second-order reliability method and Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in decision-making algorithms for the maintenance and repair of deteriorating concrete structures in marine environments.
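    The first-order reliability method (FORM) step mentioned above can be sketched for the simplest case: a linear limit state g = R − S with independent normal resistance and load, where the HL-RF iteration in standard normal space recovers the analytic reliability index. The means and standard deviations below are hypothetical, and the Nataf transformation is trivial here because the variables are already normal.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical limit state g = R - S, with R ~ N(40, 5) and S ~ N(25, 4).
mu = np.array([40.0, 25.0])
sd = np.array([5.0, 4.0])

def g(u):
    x = mu + sd * u            # map standard normal u back to physical space
    return x[0] - x[1]

def grad_g(u):
    return np.array([sd[0], -sd[1]])   # constant gradient: g is linear in u

# HL-RF iteration: u_{k+1} = [(grad·u_k - g(u_k)) / |grad|^2] * grad.
u = np.zeros(2)
for _ in range(20):
    gu, dg = g(u), grad_g(u)
    u = (dg @ u - gu) / (dg @ dg) * dg

beta = np.linalg.norm(u)       # reliability index = distance to design point
pf = phi_cdf(-beta)            # first-order failure probability
```

    For this linear case beta equals (40 − 25)/sqrt(5² + 4²) exactly; a chloride-ingress limit state would simply replace `g` and `grad_g` with the derived durability function and its gradient.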

  17. Reliability analysis for the 220 kV Libyan high voltage communication system

    Saleh, O.S.A.; AlAthram, A.Y. [General Electric Company of Libya (Libyan Arab Jamahiriya). Development Dept.]


    Electric utilities are expanding their networks to include fiber-optic communications, which offer high capacity with reliable performance at low cost. Fiber-optic networks offer a feasible technical solution for leasing excess capacity. They can be readily deployed under a wide range of network configurations and can be upgraded rapidly. This study evaluated the reliability index for the communication network of Libya's 220 kV high voltage subsystem operated by the General Electric Company of Libya (GECOL). Schematic diagrams of the communication networks were presented for both the power line carrier and fiber-optic networks. A reliability analysis of the two communication networks was performed based on the existing communication equipment. The reliability values revealed that the fiber-optic system has several advantages, such as a large bandwidth for high quality data transmission; immunity to electromagnetic interference; low attenuation, which allows for extended cable transmission; the ability to be used in dangerous environments; a higher degree of security; and a high capacity through existing conduits due to its light weight and small diameter. However, it was noted that although fiber-optic communications may be more reliable and provide the clearest signal, the power line communication (PLC) system has more redundancy, particularly for outdoor components, where the PLC has more power lines to carry the signals while the fiber-optic communications depend only on the earthing wire of the high voltage transmission line. 4 refs., 8 tabs., 6 figs.

  18. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    桂劲松; 刘红; 康海贵


    As water depth increases, the structural safety and reliability of a system become more important and more challenging to ensure. Structural reliability methods must therefore be applied in ocean engineering design, such as offshore platform design. If the performance function is known explicitly, the first-order second-moment method is often used. If it cannot be expressed explicitly, the response surface method is commonly used instead, because it is conceptually clear and simple to program. However, the traditional response surface method fits a quadratic polynomial, whose accuracy is limited because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on a whole-domain response surface is proposed for cases where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface is first constructed for the whole domain, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole domain, the true limit state surface over the whole domain can be fitted. Worked examples and comparative analysis show that the proposed method outperforms the traditional quadratic-polynomial response surface method: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface is fitted very well over the whole domain. The proposed method is therefore suitable for engineering application.
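    For context, the traditional quadratic-polynomial response surface that the abstract criticizes can be sketched as follows: fit a quadratic surrogate to a few evaluations of the (notionally expensive) performance function, then run cheap Monte Carlo on the surrogate. The performance function here is an invented stand-in for a costly finite element analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented "implicit" performance function; failure is g < 0.
def g_true(x):
    return 3.0 - x[..., 0] ** 2 - 0.5 * x[..., 1]

def quad_basis(x):
    # Full quadratic basis in two variables: 1, x1, x2, x1^2, x2^2, x1*x2.
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

# Fit the response surface by least squares on a small design sample.
X = rng.normal(size=(50, 2))
coef, *_ = np.linalg.lstsq(quad_basis(X), g_true(X), rcond=None)

def g_surrogate(x):
    return quad_basis(x) @ coef

# Monte Carlo on the cheap surrogate instead of the implicit function.
U = rng.normal(size=(200_000, 2))
pf = (g_surrogate(U) < 0).mean()
pf_ref = (g_true(U) < 0).mean()   # reference, affordable only for this toy g
```

    Because the toy `g_true` happens to be exactly quadratic, the surrogate matches everywhere; the paper's criticism applies when the true limit state is not quadratic, so the fit is accurate only near the checking point.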


    Z.-G. Zhou


    Land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused for example by forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations, and their timely detection and characterization is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near-real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results as "change/no change", and few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrate that the method can estimate the reliability of disturbances detected in satellite imagery with an estimation error of less than 5% and an overall accuracy of up to 90%.
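    The forecast-and-flag idea behind steps (2) and (3) can be sketched with an ordinary least-squares trend-plus-harmonic model standing in for BFAST; the series, disturbance size, and thresholds below are synthetic, not the paper's data.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Synthetic vegetation-index-like series: trend + annual harmonic + noise.
t = np.arange(120)                       # e.g. 10 years of monthly data
y = 0.5 + 0.001 * t + 0.1 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.01, t.size)

# Fit trend + harmonic terms on the stable history (first 100 points).
A = np.column_stack([np.ones(t.size), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
hist = slice(0, 100)
coef, *_ = np.linalg.lstsq(A[hist], y[hist], rcond=None)
resid_sd = np.std(y[hist] - A[hist] @ coef)

# Inject a disturbance (e.g. a flood) into the new data, then flag points
# outside the 99% prediction interval and report a confidence level per point.
y_new = y[100:].copy()
y_new[10:] -= 0.15
forecast = A[100:] @ coef
z = np.abs(y_new - forecast) / resid_sd
disturbed = z > 2.576                    # two-sided 99% normal interval
confidence = np.array([erf(zi / sqrt(2.0)) for zi in z])  # two-sided CL
```

    BFAST additionally segments the history at detected breaks before fitting; this sketch keeps only the forecasting and CI/CL machinery.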

  20. Implications in adjusting a gravity network with observations medium or independent: analysis of precision and reliability

    Pedro L. Faggion


    Adjustment strategies associated with the methodology used to establish a high-precision gravity network in Paraná are presented. The network was established with stations at 21 sites in the State of Paraná and one in the State of São Paulo. To reduce the risk of losing points of the gravity network, the stations were established on points of the GPS High Precision Network of Paraná, which has a relatively homogeneous geographical distribution. For each of the gravity lines belonging to the loops of the network, three to six observations were obtained. In the first adjustment strategy investigated, the mean value of the observations obtained for each gravity line was taken as the observation. In the second strategy, the observations were treated as independent. The comparison of these strategies revealed that the precision criterion alone is not enough to indicate the optimal solution for a gravity network; an additional criterion is needed to analyse the adjusted solution. The reliability criterion for geodetic networks, which separates into internal and external reliability, was therefore used: internal reliability to verify the rigor with which the network reacts in detecting and quantifying gross errors in the observations, and external reliability to quantify the influence of non-localized errors on the adjusted parameters. The aspects that differentiate the obtained solutions when the precision and reliability criteria are combined in the quality analysis of a gravity network are presented.

  1. Radiocarbon analysis of stratospheric CO2 retrieved from AirCore sampling

    Paul, Dipayan; Chen, Huilin; Been, Henk A.; Kivi, Rigel; Meijer, Harro A. J.


    Radiocarbon (14C) is an important atmospheric tracer and one of the many used in understanding the global carbon budget, which includes the greenhouse gases CO2 and CH4. Measurement of radiocarbon in atmospheric CO2 generally requires the collection of large air samples (a few liters) from which CO2 is extracted; the radiocarbon concentration is then determined using accelerator mass spectrometry (AMS). However, the regular collection of air samples from the stratosphere, for example using aircraft and balloons, is prohibitively expensive. Here we describe radiocarbon measurements in stratospheric CO2 collected by the AirCore sampling method. AirCore is an innovative atmospheric sampling system, comprising a long tube descending from high altitude with one end open and the other closed, that has been demonstrated to be a reliable, cost-effective sampling system for high-altitude profile (up to ≈ 30 km) measurements of CH4 and CO2. In Europe, AirCore measurements have been performed on a regular basis near Sodankylä (northern Finland) since September 2013. Here we describe the analysis of samples from two such AirCore flights, made there on consecutive days in July 2014, to determine the radiocarbon concentration in stratospheric CO2. The stratospheric part of the AirCore was divided into six sections, each containing ≈ 35 µg CO2 (≈ 9.6 µgC), and stored in a stratospheric air subsampler constructed from 1/4 in. coiled stainless steel tubing (≈ 3 m). A small-volume extraction system was constructed that enabled > 99.5% CO2 extraction from the stratospheric air samples. Additionally, a new small-volume high-efficiency graphitization system was constructed for graphitization of the extracted CO2 samples, which were measured at the Groningen AMS facility. Since the stratospheric samples were very similar in mass, reference samples were also prepared in the same mass range for

  2. Discussion about modeling the effects of neutron flux exposure for nuclear reactor core analysis

    Vondy, D.R.


    Methods used to calculate the effects of exposure to a neutron flux are described. The modeling of the nuclear-reactor core history presents an analysis challenge. The nuclide chain equations must be solved, and some of the methods in use for this are described. Techniques for treating reactor-core histories are discussed and evaluated.

  3. Improvement of Axial Reflector Cross Section Generation Model for PWR Core Analysis

    Shim, Cheon Bo; Lee, Kyung Hoon; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]


    This paper covers a study on the improvement of the axial reflector XS generation model. The improved 1D core model is presented in detail in the next section. Reflector XS generated by the improved model are compared with those of the conventional model in the third section, which also covers the nuclear design parameters generated by the two XS sets. The significance of this study is discussed in the last section. The two-step procedure has been regarded as the most practical approach for reactor core design because it provides core design parameters quite rapidly within an acceptable range. This approach is therefore adopted for the SMART (System-integrated Modular Advanced ReacTor) core design at KAERI with the DeCART2D1.1/MASTER4.0 (hereafter DeCART2D/MASTER) code system. Within the framework of the two-step SMART core design procedure, various studies have been performed to improve the reliability and efficiency of the core design. One of them is the improvement of the reflector cross section (XS) generation model. While the conventional FA/reflector two-node model used for most core designs cannot consider the actual configuration of the fuel rods that meet the axial reflectors at right angles, the revised model reflects the axial fuel configuration by introducing a radially simplified core model. The significance of the revision is evaluated by examining the HGC generated by DeCART2D, the reflector XS, and the core design parameters generated with the two models. It is verified that applying the revised model removes about 30 ppm of CBC error and reduces the maximum Fq error from about 6% to 2.5%; errors in the AO and axial power shapes are also reduced significantly. It can therefore be concluded that the simplified 1D core model improves the accuracy of the axial reflector XS and enhances the reliability of the two-step procedure. Since it is hard for core designs to be free from the two-step approach, it is necessary to find

  4. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Fadi N. Sibai


    Gaussian elimination is used in many applications, in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for small numbers of cores. For a large matrix size (n ⩾ 16K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16K cores. The efficiencies of all four methods are low with 1K cores or more, underscoring a major problem of multi-core systems: the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and a major challenge in the exascale computing age.
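    As a baseline for the parallel variants modeled above, sequential Gaussian elimination with partial pivoting can be sketched as follows (a generic textbook implementation, not the paper's code):

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    the sequential baseline that parallel variants aim to speed up."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining entry in column k up.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate column k below the pivot row.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

    The elimination loop over `i` is the part the paper's parallel and SIMD variants distribute across cores and vector lanes.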

  5. Core influence on the frequency response analysis (FRA) of power transformers through the finite element method

    D. L. Alvarez


    In this paper, the influence of core parameters on Frequency Response Analysis is analyzed through the equivalent circuit impedance matrix of the transformer winding; the parameters of the circuit have been computed using the Finite Element Method. In order to appreciate the behavior of the iron core in comparison with the air core, the frequency dependence of the resonances is calculated, showing that the air core only influences the results at low frequencies. The core is modeled using a complex permeability, and the conductivity and permeability parameters are varied to show their influence on the resonances, which turned out to be negligible. To explain this behavior, the eigenvalues of the inverse impedance matrix are calculated, showing that they are similar for different values of conductivity and permeability. Finally, the magnetic flux inside and outside the core and its influence on the frequency response are studied.

  6. Rasch analysis of the postconcussive symptom questionnaire: measuring the core construct of brain injury symptomatology.

    Gardizi, Elmar; Millis, Scott R; Hanks, Robin; Axelrod, Bradley


    The Postconcussive Symptom Questionnaire (PCSQ; Lees-Haley, 1992) is purported to measure four constructs: psychological, cognitive, somatic, and infrequency (i.e., items intended to reflect negative impression management) symptoms. The utility and validity of Postconcussive Syndrome (PCS) as a diagnostic condition continue to be debated; examining the instruments used to measure postconcussive symptoms can increase our understanding of this issue. The aim of this study was to derive a revised PCSQ to target the core construct of subjective symptoms reported by persons with traumatic brain injury (TBI). A total of 133 people with mild to severe TBI completed the 45-item PCSQ. Items were scored dichotomously, as symptom present or absent. Rasch analysis, based on the mathematical model formulated by Rasch (1960), was used to derive the revised PCSQ. Misfitting and redundant items were removed and a second model containing 19 items was fitted. The revised PCSQ-19 had superior psychometric qualities; reliability was 0.81. The PCSQ-19 provides a more targeted, unidimensional assessment of subjective symptoms following brain injury. The findings also revealed information related to the symptom hierarchy which can further our understanding of PCS.

  7. Supermode analysis of the 18-core photonic crystal fiber laser

    王远; 姚建铨; 郑一博; 温午麒; 陆颖; 王鹏


    The modes of an 18-core photonic crystal fiber laser are discussed and calculated, and the corresponding far-field distribution of the supermodes is given by the Fresnel diffraction integral. To improve beam quality, a mode selection method based on the Talbot effect is introduced. The reflection coefficients are calculated, and the result shows that the in-phase supermode can be locked better at a large propagation distance.

  8. Reliability analysis of stochastic structural system considering static strength, stiffness and fatigue

    AN WeiGuang; ZHAO WeiTao; AN Hai


    Multiple failure modes, such as dead load failure, fatigue failure and stiffness failure, can appear during the service life of a structural system. In this paper, an expression for the residual resistance is derived from the impact of random crack propagation induced by the fatigue load on the critical limit stress and section modulus. The failure modes of every element of the structural system under dead and fatigue loads are analyzed, and the influence of the correlation between failure modes on the reliability of an element is considered. The failure mechanism and the correlation of failure modes under dead and fatigue loads are discussed, and a method of reliability analysis considering static strength, fatigue and stiffness is given. A numerical example shows that the failure probability differs with service life and that dead and fatigue loads influence the reliability of the structural system differently. For practical engineering, this method of reliability analysis is better than methods that consider only a single factor (static strength, fatigue, stiffness, etc.).

  9. Application of Support Vector Machine to Reliability Analysis of Engine Systems

    Zhang Xinfeng


    Reliability analysis plays a very important role in assessing the performance and making maintenance plans for engine systems. This research presents a comparative study of the predictive performance of support vector machines (SVM), least squares support vector machines (LSSVM) and neural network time series models for forecasting failures and reliability in engine systems. Further, the reliability indexes of engine systems are computed with Weibull probability paper programmed in Matlab. The results show that the probability distribution of the forecast outcomes is consistent with the distribution of the actual data, both following the Weibull distribution, and that the predictions by SVM and LSSVM provide accurate estimates of the characteristic life. SVM and LSSVM are therefore both viable choices for engine system reliability analysis. Moreover, the predictive precision of the LSSVM-based method is higher than that of SVM. For small samples, prediction by LSSVM is preferable, because its computational cost is lower and its precision is satisfactory.
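    The Weibull probability paper technique mentioned above amounts to a straight-line fit on transformed axes: plot median-rank estimates of F(t) as ln(−ln(1 − F)) against ln(t), so the slope gives the shape parameter and the intercept the characteristic life. This sketch uses simulated failure times in Python rather than the paper's Matlab implementation and engine data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated failure times from a Weibull(shape=2.0, scale=500 h) population.
shape_true, scale_true = 2.0, 500.0
t = np.sort(scale_true * rng.weibull(shape_true, 100))

# Median-rank estimates of F(t) via Bernard's approximation.
i = np.arange(1, t.size + 1)
F = (i - 0.3) / (t.size + 0.4)

# On Weibull probability paper, ln(-ln(1-F)) is linear in ln(t):
# slope = shape, intercept = -shape * ln(scale).
x = np.log(t)
yv = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(x, yv, 1)

shape_est = slope
scale_est = np.exp(-intercept / slope)   # characteristic life (eta)
```

    Once shape and scale are estimated, reliability indexes such as R(t) = exp(−(t/eta)^shape) follow directly.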

  10. Reliability and Sensitivity Analysis of Transonic Flutter Using Improved Line Sampling Technique

    Song Shufang; Lu Zhenzhou; Zhang Weiwei; Ye Zhengyin


    The improved line sampling (LS) technique, an effective numerical simulation method, is employed to analyze the probabilistic characteristics and reliability sensitivity of flutter with random structural parameters in transonic flow. The improved LS technique is a novel methodology for reliability and sensitivity analysis of high-dimensionality, low-probability problems with implicit limit state functions, and it does not require any approximating surrogate of the implicit limit state equation. The improved LS is used to estimate the flutter reliability and sensitivity of a two-dimensional wing, in which structural properties such as frequency, gravity center parameters, and mass ratio are considered as random variables. A computational fluid dynamics (CFD) based unsteady aerodynamic reduced-order model (ROM) is used to construct the aerodynamic state equations. Coupling the structural state equations with the aerodynamic state equations, the safety margin of flutter is formulated using the critical flutter velocity. The results show that the improved LS technique can effectively decrease the computational cost of the random uncertainty analysis of flutter. The reliability sensitivity, defined as the partial derivative of the failure probability with respect to the distribution parameters of the random variables, can help to identify the important parameters and guide structural optimization design.
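    The core of a line sampling estimator can be sketched for a linear safety margin in standard normal space, where every line contributes the same partial probability and the estimator is exact. The reliability index and important direction below are hypothetical, and a real flutter margin would replace the linear `g` with the implicit limit state solved along each line by a 1-D root finder.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical linear safety margin in standard normal space; g(u) < 0 = failure.
beta = 3.0
alpha = np.array([0.6, 0.8])             # unit-length important direction

def g(u):
    return beta - u @ alpha

# Line sampling: project each sample onto the plane orthogonal to alpha,
# find the failure point c along the line u_perp + c*alpha, and average
# the partial probabilities Phi(-c).
n = 500
u = rng.normal(size=(n, 2))
u_perp = u - np.outer(u @ alpha, alpha)  # component orthogonal to alpha

c = np.empty(n)
for k in range(n):
    # Solve g(u_perp + c*alpha) = 0; linear here, so the root is explicit
    # (u_perp[k] @ alpha is zero by construction, giving c = beta).
    c[k] = beta - u_perp[k] @ alpha

pf = np.mean([phi_cdf(-ck) for ck in c])
```

    For this linear case every line yields c = beta, so the estimator has zero variance and returns Phi(−beta) exactly; the efficiency gain over plain Monte Carlo grows as the failure probability shrinks.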

  11. Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.

    Jalayer, Mohammad; Zhou, Huaguo


    The severity of roadway departure crashes mainly depends on roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include cross section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) that quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address the existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining a reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4500 300-ft roadway segments. The results confirm that reliability indices can serve as indicators to gauge safety levels, such that the greater the reliability index value, the lower the ROR crash rate.

  12. Moment Method Based on Fuzzy Reliability Sensitivity Analysis for a Degradable Structural System

    Song Jun; Lu Zhenzhou


    For a degradable structural system with a fuzzy failure region, a moment method based on a fuzzy reliability sensitivity algorithm is presented. According to the values of the performance function, the integration region for calculating the fuzzy failure probability is first split into a series of subregions in which the membership function values of the performance function within the fuzzy failure region can be approximated by a set of constants. The fuzzy failure probability is then transformed into a sum of products of the random failure probabilities and the approximate constants of the membership function in the subregions. Furthermore, the fuzzy reliability sensitivity analysis is transformed into a series of random reliability sensitivity analyses, and the random reliability sensitivity can be obtained by the constructed moment method. The primary advantages of the presented method are its higher efficiency for implicit performance functions of low and medium dimensionality and its wide applicability to multiple failure modes and non-normal basic random variables. The limitation is that the required computational effort grows exponentially with the dimensionality of the basic random variables; hence, it is not suitable for high-dimensionality problems. Compared with the available methods, the presented one is quite competitive when the dimensionality is lower than 10. Examples are presented to verify the advantages and indicate the limitations.

  13. Analysis of reliability metrics and quality enhancement measures in current density imaging.

    Foomany, F H; Beheshti, M; Magtibay, K; Masse, S; Foltz, W; Sevaptsidis, E; Lai, P; Jaffray, D A; Krishnan, S; Nanthakumar, K; Umapathy, K


    Low frequency current density imaging (LFCDI) is a magnetic resonance imaging (MRI) technique which enables calculation of current pathways within the medium of study. The induced current produces a magnetic flux which presents itself in phase images obtained through MRI scanning. A class of LFCDI challenges arises from the subject rotation requirement, which calls for reliability analysis metrics and specific image registration techniques. In this study these challenges are formulated, and in light of the proposed discussion, a reliability analysis of the calculation of current pathways in a designed phantom and a pig heart is presented. The current passed is measured with less than 5% error for the phantom using the CDI method. It is shown that Gauss's law for magnetism can be treated as a reliability metric for matching the images in the two orientations. For both the phantom and the pig heart, the usefulness of image registration for mitigating rotation errors is demonstrated. The reliability metric provides a good representation of the degree of correspondence between images in the two orientations. In our CDI experiments this metric produced values of 95% and 26% for the phantom, and 88% and 75% for the pig heart, for mismatch rotations of 0 and 20 degrees, respectively.

  14. Assessing validity and reliability of Resting Metabolic Rate in six gas analysis systems

    Cooper, Jamie A.; Watras, Abigail C.; O’Brien, Matthew J.; Luke, Amy; Dobratz, Jennifer R.; Earthman, Carrie P.; Schoeller, Dale A.


    The Deltatrac Metabolic Monitor (DTC), one of the most popular indirect calorimetry systems for measuring resting metabolic rate (RMR) in human subjects, is no longer being manufactured. This study compared five other gas analysis systems to the DTC. Resting metabolic rate was measured by the DTC and at least one other instrument at three study sites, for a total of 38 participants. The five indirect calorimetry systems were: MedGraphics CPX Ultima, MedGem, Vmax Encore 29 System, TrueOne 2400, and Korr ReeVue. Validity was assessed using paired t-tests to compare means, while reliability was assessed using both paired t-tests and root mean square calculations with F tests for significance. Within-subject comparisons for validity of RMR revealed a significant difference between the DTC and the Ultima. Bland-Altman plot analysis showed significant bias with increasing RMR values for the Korr and MedGem. Respiratory exchange ratio (RER) analysis showed a significant difference between the DTC and the Ultima and a trend toward a difference with the Vmax (p = 0.09). Reliability assessment for RMR revealed that all instruments had a significantly larger coefficient of variation (CV) for RMR (ranging from 4.8% to 10.9%) than the 3.0% CV of the DTC. Reliability assessment for RER showed that none of the instrument CVs was significantly larger than the DTC CV. The results were disappointing, with none of the instruments equaling the within-person reliability of the DTC. The TrueOne and Vmax were the most valid instruments in comparison with the DTC for both RMR and RER assessment. Further testing is needed to identify an instrument with the reliability and validity of the DTC. PMID:19103333

  15. Space Shuttle Rudder Speed Brake Actuator-A Case Study Probabilistic Fatigue Life and Reliability Analysis

    Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.


    The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.

  16. Analysis of White Dwarfs with Strange-Matter Cores

    Mathews, G J; O'Gorman, B; Lan, N Q; Zech, W; Otsuki, K; Weber, F


    We summarize masses and radii for a number of white dwarfs as deduced from a combination of proper motion studies, Hipparcos parallax distances, effective temperatures, and binary or spectroscopic masses. A puzzling feature of these data is that some stars appear to have radii significantly smaller than expected for a standard electron-degenerate white-dwarf equation of state. We construct a projection of white-dwarf radii for fixed effective mass and conclude that there is at least marginal evidence for bimodality in the radius distribution for white dwarfs. We argue that if such compact white dwarfs exist, it is unlikely that they contain an iron core. We propose an alternative of strange-quark matter within the white-dwarf core. We also discuss the impact of the so-called color-flavor-locked (CFL) state in the strange-matter core associated with color superconductivity. We show that the data exhibit several features consistent with the expected mass-radius relation of strange dwarfs. We identify ...

  17. Description and Analysis of Core Samples: The Lunar Experience

    McKay, David S.; Allton, Judith H.


    Although no samples yet have been returned from a comet, extensive experience from sampling another solar system body, the Moon, does exist. While, in overall structure, composition, and physical properties the Moon bears little resemblance to what is expected for a comet, sampling the Moon has provided some basic lessons in how to do things which may be equally applicable to cometary samples. In particular, an extensive series of core samples has been taken on the Moon, and coring is the best way to sample a comet in three dimensions. Data from cores taken at 24 Apollo collection stations and 3 Luna sites have been used to provide insight into the evolution of the lunar regolith. It is now well understood that this regolith is very complex and reflects gardening (stirring of grains by micrometeorites), erosion (from impacts and solar wind sputtering), maturation (exposure on the bare lunar surface to solar wind ions and micrometeorite impacts), comminution of coarse grains into finer grains, blanket deposition of coarse-grained layers, and other processes. All of these processes have been documented in cores. While a cometary regolith should not be expected to parallel in detail the lunar regolith, it is possible that the upper part of a cometary regolith may include textural, mineralogical, and chemical features which reflect the original accretion of the comet, including a form of gardening. Differences in relative velocities and gravitational attraction no doubt made this accretionary gardening qualitatively much different than the lunar version. Furthermore, at least some comets, depending on their orbits, have been subjected to impacts of the uppermost surface by small projectiles at some time in their history. Consequently, a more recent post-accretional gardening may have occurred. Finally, for comets which approach the sun, large-scale erosion driven by gas loss may have occurred. The uppermost material of these comets may reflect some of the features

  18. Reliability, risk and availability analysis and evaluation of a port oil pipeline transportation system in constant operation conditions

    Kolowrocki, Krzysztof [Gdynia Maritime University, Gdynia (Poland)]


    In the paper, the multi-state approach to the analysis and evaluation of system reliability, risk and availability is applied in practice. Theoretical definitions and results are illustrated by their application to the reliability, risk and availability evaluation of an oil pipeline transportation system. The pipeline transportation system is considered under operating conditions that are constant in time, so the system reliability structure and the reliability functions of its components do not change. The system reliability structure is fixed with high accuracy, whereas the input reliability characteristics of the pipeline components are not sufficiently exact because of the lack of statistical data necessary for their estimation. The results may be considered an illustration of how the proposed methods can be applied in the reliability analysis of pipeline systems. (author)

  19. Reliability Analysis of Brittle Material Structures - Including MEMS(?) - With the CARES/Life Program

    Nemeth, Noel N.


    Brittle materials are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. In this presentation, an overview of the CARES/Life program will be provided. Emphasis will be placed on describing the latest enhancements to the code for reliability analysis with time-varying loads and temperatures (fully transient reliability analysis). Early efforts to investigate the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures will also be described, as well as the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single-crystal and edge-flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.
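
    The probabilistic basis of CARES/Life is Weibull weakest-link statistics. A minimal sketch of the two-parameter form; the characteristic strength and Weibull modulus below are illustrative values, not CARES/Life material data:

```python
import math

def weibull_failure_probability(stress, sigma0, m, volume_ratio=1.0):
    """Two-parameter Weibull weakest-link model:
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m).
    sigma0 (characteristic strength) and m (Weibull modulus) are
    illustrative; real analyses fit them to fracture-strength data."""
    return 1.0 - math.exp(-volume_ratio * (stress / sigma0) ** m)

# Failure probability rises steeply with stress for a high Weibull modulus
print(round(weibull_failure_probability(300.0, 400.0, 10), 4))
print(round(weibull_failure_probability(400.0, 400.0, 10), 4))
```

At the characteristic strength (stress = sigma0, unit volume) the model gives P_f = 1 - 1/e by definition, a quick sanity check for any implementation.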

  20. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.


    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.


    Galkina Elena Vladislavovna


    The article proposes methods for analyzing the reliability of bidders and their tender offers for the implementation of construction works. Special attention is paid to the complexity of these processes and the need to involve serious, professional and responsible executors. Applying the described methods reduces the risks related to the selection of a participant in a construction project. The article defines the main stages of the procedure, which make it possible to consider both the economic state of applicants and the economic and technical indicators of the reliability of tender offers, and identifies the main characteristics to be considered at each stage. The author concludes that the reliability of bidders is determined by comparing their economic state with their capacity to implement orders with the specified characteristics. In the article's terminology, the reliability of an applicant's offer is its ability to execute the order on the bidder's own conditions; determining this reliability is based on comparing tender offers with the contender's characteristics of the objects. Rational methods for comparing the economic indicators are offered. It was also found that, at present, the method of comparing the technical indicators of analogous projects with the indicators of a bidder's object is not formalized, which limits its application. Finally, it is concluded that developing the methods applied to technical indicators would provide a coherent system for evaluating the reliability of construction bidders and their offers, creating the basis for an appropriate automated system that can be used both for the selection of competitive organizations and for the preparation of offers by applicants.

  2. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    Iskandar, Ismed; Satria Gondokaryono, Yudi


    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending on the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimation methods perform better than maximum likelihood. The sensitivity analyses show some sensitivity to shifts in the prior locations. They also show the robustness of the Bayesian analysis within the range
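
    The Bayesian-versus-MLE comparison can be sketched for the simplest competing-risks case: exponential cause-specific rates (Weibull with shape 1) with a conjugate gamma prior. All numbers below are made up for illustration; the paper's full Weibull treatment requires numerical estimation:

```python
# Sketch: cause-specific rate estimation for an exponential competing-risks
# model, comparing the MLE with a conjugate gamma-prior Bayes estimate.
# Prior parameters and failure counts are hypothetical.

def mle_rate(n_failures, total_time):
    """Maximum likelihood estimate of an exponential failure rate."""
    return n_failures / total_time

def bayes_rate(n_failures, total_time, prior_shape, prior_rate):
    """Gamma(a, b) prior + exponential data -> Gamma(a + n, b + T) posterior;
    the posterior mean is (a + n) / (b + T)."""
    return (prior_shape + n_failures) / (prior_rate + total_time)

total_time = 1000.0                      # cumulative time on test (hours)
failures = {"wear": 8, "overload": 2}    # hypothetical counts per cause

for cause, n in failures.items():
    print(cause, mle_rate(n, total_time),
          round(bayes_rate(n, total_time, 2.0, 200.0), 5))
```

With independent causes, each cause-specific rate is estimated from its own failure count over the same total exposure time; the prior pulls sparse-data estimates toward prior knowledge, which is the benefit the abstract describes.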

  3. Development of human reliability analysis methodology and its computer code during low power/shutdown operation

    Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim, Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of)]; Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)]


    The objective of this study is to develop an appropriate procedure for evaluating human error in LP/S (low power/shutdown) operation and a computer code that calculates human error probabilities (HEPs) within this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions based on a review of existing studies. This procedure is applied to evaluate the LOOP (Loss of Off-site Power) sequence, and the HEPs obtained using SEPLOT are used for the quantitative evaluation of the core uncovery frequency. In this evaluation, DYLAM-3, one of the dynamic reliability computer codes, which has advantages over the event tree/fault tree (ET/FT) approach, is used. The SEPLOT procedure developed in this study provides a basis and framework for human error evaluation, and makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained with SEPLOT as input data to the DYLAM-3 code. Eventually, it is expected that the results of this study will contribute to improved safety in LP/S and reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)

  4. Efficient Approximate Method of Global Reliability Analysis for Offshore Platforms in the Ice Zone


    Ice load is the dominant load in the design of offshore platforms in the ice zone, and the extreme ice load is the key factor affecting the safety of platforms. The present paper studies the statistical properties of the global resistance and the extreme responses of the jacket platforms in Bohai Bay, considering the randomness of ice load, dead load, steel elastic modulus, yield strength and structural member dimensions. Then, based on the above results, an efficient approximate method of global reliability analysis for offshore platforms is proposed, which converts the implicit nonlinear performance function of conventional reliability analysis into an explicit linear one. Finally, numerical examples of the JZ20-2 MSW, JZ20-2NW and JZ20-2 MUQ offshore jacket platforms in the Bohai Bay demonstrate the efficiency, accuracy and applicability of the proposed method.
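
    Once the performance function is linear and explicit, the reliability index follows from the classic first-order result for g = R - S with independent normal resistance and load. The means and standard deviations below are hypothetical, not the JZ20-2 platform data:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def linear_reliability(mu_r, sigma_r, mu_s, sigma_s):
    """For a linearized performance function g = R - S with independent
    normal R (resistance) and S (load):
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2),
    P_f = 1 - Phi(beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    return beta, 1.0 - phi(beta)

# Illustrative numbers: global resistance vs extreme ice load (MN)
beta, pf = linear_reliability(mu_r=12.0, sigma_r=1.0, mu_s=8.0, sigma_s=1.5)
print(round(beta, 3), f"{pf:.2e}")
```

The appeal of the linearization in the abstract is exactly this: the failure probability drops out of a closed-form expression instead of a nested nonlinear structural analysis.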

  5. Grain-size analysis of sediment cores collected in 2009 offshore from Palos Verdes, California

    U.S. Geological Survey, Department of the Interior — This part of the data release includes grain-size analysis of sediment cores collected in 2009 offshore of Palos Verdes, California. It is one of seven files...

  6. Analysis of suprathermal nuclear processes in the solar core plasma

    Voronchev, Victor T.; Nakao, Yasuyuki; Watanabe, Yukinobu


    A consistent model for the description of suprathermal processes in the solar core plasma, naturally triggered by fast particles generated in exoergic nuclear reactions, is formulated. This model, based on the formalism of in-flight reaction probability, operates with different methods of treating particle slow-down in the plasma, and allows for the influence of electron degeneracy and electron screening on processes in the matter. The model is applied to examine the slowing-down of 8.7 MeV α-particles produced in the ⁷Li(p,α)α reaction of the pp chain, and to analyze the suprathermal processes in the solar CNO cycle induced by them. Particular attention is paid to the suprathermal ¹⁴N(α,p)¹⁷O reaction, unappreciated in standard solar model simulations. It is found that an appreciable non-standard (α,p) nuclear flow due to this reaction appears in the matter and modifies the running of the CNO cycle in ∼95% of the solar core region. In this region, at R > 0.1 R☉, the normal branching of nuclear flow ¹⁴N ← ¹⁷O → (¹⁸F) → ¹⁸O transforms to the abnormal sequential flow ¹⁴N → ¹⁷O → (¹⁸F) → ¹⁸O, altering some element abundances. In particular, nuclear network calculations reveal that in the outer core the abundances of the ¹⁷O and ¹⁸O isotopes can increase by a factor of 20 as compared with standard estimates. A conjecture is made that other CNO suprathermal (α,p) reactions may also affect the abundances of CNO elements, including those generating solar neutrinos.

  7. Effect of wine dilution on the reliability of tannin analysis by protein precipitation

    Jensen, Jacob Skibsted; Werge, Hans Henrik Malmborg; Egebo, Max


    A reported analytical method for tannin quantification relies on selective precipitation of tannins with bovine serum albumin. The reliability of tannin analysis by protein precipitation on wines having variable tannin levels was evaluated by measuring the tannin concentration of various dilutions of five commercial red wines. Tannin concentrations of both very diluted and concentrated samples were systematically underestimated, which could be explained by a precipitation threshold and insufficient protein for precipitation, respectively. Based on these findings, we have defined a valid range...

  8. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report


    07 planning conference, 14 Dec 06. II Marine Expeditionary Force (MEF) meeting with Major Smith, 14 Dec 06. Gulf of Mexico, Tyndall Air Force Base Missile... Restructured action item spreadsheet. Reviewed the following storyboards (functional flow, graphics, and text): 050101 Main Rotor System components. Reliability Information Analysis Center, 6000 Flanagan Road

  9. A Reliability Analysis of a Rainfall Harvesting System in Southern Italy

    Lorena Liuzzo; Vincenza Notaro; Gabriele Freni


    Rainwater harvesting (RWH) may be an effective alternative water supply solution in regions affected by water scarcity. It has recently become a particularly important option in arid and semi-arid areas (like Mediterranean basins), mostly because of its many benefits and affordable costs. This study provides an analysis of the reliability of using a rainwater harvesting system to supply water for toilet flushing and garden irrigation purposes, with reference to a single-family home in a resid...

  10. Application of the Simulation Based Reliability Analysis on the LBB methodology

    Pečínka L.; Švrček M.


    Guidelines on how to demonstrate the existence of Leak Before Break (LBB) have been developed in many western countries. These guidelines, partly based on NUREG/CR-6765, define the steps that should be fulfilled to get a conservative assessment of LBB acceptability. As a complement and also to help identify the key parameters that influence the resulting leakage and failure probabilities, the application of Simulation Based Reliability Analysis is under development. The used methodology will ...

  11. Towards increased reliability by objectification of Hazard Analysis and Risk Assessment (HARA) of automated automotive systems

    Khastgir, Siddartha; Birrell, Stewart A.; Dhadyalla, Gunwant; Sivencrona, Håkan; Jennings, P. A. (Paul A.)


    Hazard Analysis and Risk Assessment (HARA) in various domains like automotive, aviation, process industry etc. suffer from the issues of validity and reliability. While there has been an increasing appreciation of this subject, there have been limited approaches to overcome these issues. In the automotive domain, HARA is influenced by the ISO 26262 international standard which details functional safety of road vehicles. While ISO 26262 was a major step towards analysing hazards and risks, lik...

  12. An evaluation of the reliability and usefulness of external-initiator PRA (probabilistic risk analysis) methodologies

    Budnitz, R.J.; Lambert, H.E. (Future Resources Associates, Inc., Berkeley, CA (USA))


    The discipline of probabilistic risk analysis (PRA) has become so mature in recent years that it is now being used routinely to assist decision-making throughout the nuclear industry. This includes decision-making that affects design, construction, operation, maintenance, and regulation. Unfortunately, not all sub-areas within the larger discipline of PRA are equally "mature," and therefore the many different types of engineering insights from PRA are not all equally reliable. 93 refs., 4 figs., 1 tab.

  13. Coupled-mode analysis for single-helix chiral fiber gratings with small core-offset

    Li Yang; Linlin Xue; Jue Su; Jingren Qian


    Using conventional coupled-mode theory, a set of coupled-mode equations is formulated for single-helix chiral fiber long-period gratings. A helical-core fiber is analyzed as an example. The analysis is simple in mathematical form and intuitive in physical concept. Based on the analysis, the polarization independence of mode coupling in special fiber gratings is revealed. The transmission characteristics of helical-core fibers are also simulated and discussed.

  14. Reliability and Security Analysis on Two-Cell Dynamic Redundant System

    Hongsheng Su


    The reliability and security of three types of two-cell dynamic redundant systems, which have been widely applied in modern railway signal systems, are analyzed, and an isomorphic Markov model of these systems is established in this paper. Several important factors were considered during modeling, including common-cause failure, the coverage of diagnostic systems, online maintainability, and periodic inspection and maintenance, as well as multiple failure modes, which makes the established model more credible. Through analysis and calculation of the reliability and security indexes of the three types of two-module dynamic redundant structures, the paper reaches a significant conclusion: the safety and reliability of this kind of structure have an upper limit and cannot be improved beyond it through hardware and software comparison methods when the failure and repair rates are fixed. Finally, the paper performs simulation investigations, compares the calculation results of the three redundant systems, analyzes their respective advantages and disadvantages, and gives the application scope of each, which provides theoretical and technical support for the selection of railway signal equipment.
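
    A stripped-down version of such a Markov availability model, for a two-cell redundant system with a single repair crew, can be solved in closed form. The failure and repair rates below are made up, and the paper's common-cause failure and diagnostic-coverage factors are deliberately omitted from this sketch:

```python
# Tiny Markov availability model: state = number of failed cells (0, 1, 2),
# two identical cells, one repair crew. Transition rates:
#   0 -> 1 at 2*lam (either cell fails), 1 -> 2 at lam,
#   1 -> 0 at mu,   2 -> 1 at mu (single crew repairs one cell at a time).

def steady_state(lam, mu):
    """Solve the birth-death balance equations:
    pi1 = (2*lam/mu)*pi0, pi2 = (lam/mu)*pi1, with probabilities summing to 1."""
    r1 = 2.0 * lam / mu
    r2 = r1 * lam / mu
    pi0 = 1.0 / (1.0 + r1 + r2)
    return pi0, r1 * pi0, r2 * pi0

lam, mu = 1e-3, 1e-1          # illustrative per-hour rates
pi0, pi1, pi2 = steady_state(lam, mu)
availability = pi0 + pi1      # system works while at least one cell is up
print(round(availability, 6))
```

Adding common-cause failure (a direct 0 -> 2 transition) caps the achievable availability regardless of redundancy, which is the "upper limit" conclusion the abstract reaches.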

  15. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.

  16. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Matthew Bucknor


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  17. Advanced reactor passive system reliability demonstration analysis for an external event

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin [Argonne National Laboratory, Argonne (United States)]


    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  18. Time-dependent Reliability Analysis of Flood Defence Assets Using Generic Fragility Curve

    Nepal Jaya


    Flood defence assets such as earth embankments comprise the vital part of linear flood defences in many countries, including the UK, and protect inland areas from flooding. The risks of flooding are likely to increase in the future due to increasing pressure on land use and to the more frequent rainfall events and rising sea level caused by climate change, which also affect aging flood defence assets. It is therefore important that flood defence assets are maintained at a high level of safety and serviceability. The high costs associated with preserving these deteriorating assets, and the limited funds available for their maintenance, require the development of systematic approaches to ensure a sustainable flood-risk management system. The integration of realistic deterioration measurement and reliability-based performance assessment techniques has tremendous potential for the structural safety and economic feasibility of flood defence assets, so the need for reliability-based performance assessment is evident. However, investigations on the time-dependent reliability analysis of flood defence assets are limited. This paper presents a novel approach for time-dependent reliability analysis of flood defence assets, in which a time-dependent fragility curve is developed using a state-based stochastic deterioration model. The applicability of the proposed approach is then demonstrated with a case study.

  19. Reliability Analysis of a 3-Machine Power Station Using State Space Approach

    Wasiu Akande Ahmed


    With the advent of high-integrity fault-tolerant systems, the ability to account for repairs of partially failed (but still operational) systems becomes increasingly important. This paper presents a systematic method of determining the reliability of a 3-machine electric power station, taking into consideration the failure rates and repair rates of the individual components (machines) that make up the system. A state-space transition process for a 3-machine system with 23 states was developed and, consequently, steady-state equations were generated based on Markov mathematical modeling of the power station. Important reliability components were deduced from this analysis. The simulation was achieved with codes written in the Excel®-VBA programming environment. System reliability using the state-space approach proves to be a viable and efficient technique of reliability prediction, as it is able to predict the state of the system under consideration. For neatness and easy entry of data, a Graphic User Interface (GUI) was designed.
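
    A simplified version of the state-space enumeration can be sketched by treating the three machines as independent, each with steady-state availability μ/(λ+μ); a shared repair crew, as in a fuller Markov model, would require solving the complete transition-rate matrix instead. The rates below are hypothetical:

```python
from itertools import product

def machine_availability(lam, mu):
    """Steady-state availability of one repairable machine."""
    return mu / (lam + mu)

def state_probabilities(lam, mu, n_machines=3):
    """Probability of each up(1)/down(0) state vector, assuming independent
    machines (an assumption; repair-queue coupling would change this)."""
    p = machine_availability(lam, mu)
    probs = {}
    for state in product((1, 0), repeat=n_machines):
        pr = 1.0
        for s in state:
            pr *= p if s else (1.0 - p)
        probs[state] = pr
    return probs

probs = state_probabilities(lam=0.01, mu=0.09)
p_all_up = probs[(1, 1, 1)]
p_at_least_one = 1.0 - probs[(0, 0, 0)]
print(round(p_all_up, 4), round(p_at_least_one, 6))
```

The station-level reliability figure then depends on how many machines must be up to meet demand, which is exactly the kind of question the state-space model answers.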

  20. A new approach for interexaminer reliability data analysis on dental caries calibration

    Andréa Videira Assaf


    Objectives: (a) to evaluate the interexaminer reliability in caries detection considering different diagnostic thresholds and (b) to indicate, by using Kappa statistics, the best way of measuring interexaminer agreement during the calibration process in dental caries surveys. Methods: Eleven dentists participated in the initial training, which was divided into theoretical discussions and practical activities, and in calibration exercises performed at baseline and at 3 and 6 months after the initial training. For the examinations of 6-7-year-old schoolchildren, the World Health Organization (WHO) recommendations were followed and different diagnostic thresholds were used: WHO (decayed/missing/filled teeth, DMFT index) and WHO + IL (initial lesion) diagnostic thresholds. The interexaminer reliability was calculated by Kappa statistics, according to the WHO and WHO+IL thresholds, considering: (a) the entire dentition; (b) upper/lower jaws; (c) sextants; (d) each tooth individually. Results: Interexaminer reliability was high for both diagnostic thresholds; nevertheless, it decreased in all calibration sessions when considering teeth individually. Conclusion: Interexaminer reliability was maintained over the 6-month period under both caries diagnosis thresholds. However, great disagreement was observed for posterior teeth, especially using the WHO+IL criteria. Analysis considering dental elements individually was the best way of detecting interexaminer disagreement during the calibration sessions.
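
    Kappa statistics of the kind used here are computed from an examiner-by-examiner contingency table. A minimal sketch of Cohen's kappa for two examiners; the counts are made up for illustration:

```python
# Cohen's kappa: chance-corrected agreement between two examiners.
# kappa = (p_observed - p_expected) / (1 - p_expected)

def cohens_kappa(table):
    """table[i][j] = number of teeth rated category i by examiner A
    and category j by examiner B."""
    total = sum(sum(row) for row in table)
    n = len(table)
    p_observed = sum(table[i][i] for i in range(n)) / total
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(n)) for j in range(n)]
    p_expected = sum(row_sums[i] * col_sums[i] for i in range(n)) / total ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

#            examiner B: decayed, sound
table = [[40, 10],    # examiner A: decayed
         [5, 45]]     # examiner A: sound
print(round(cohens_kappa(table), 3))
```

Computing kappa tooth by tooth, as the study recommends, simply means building one such table per tooth, which is why per-tooth analysis exposes disagreement that a whole-dentition kappa averages away.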