The article is devoted to the 10th anniversary of the Kharkov Department (KhD) of SSTC NRS and reviews the prehistory of its creation (earlier work by KhD scientists on the reliability of automated process control systems), the main results of KhD activities, and future trends.
The state of the art in safety and reliability assessment of the software of industrial computer systems is reviewed, and likely progress over the next few years is identified and compared with the perceived needs of the user. Some of the current projects contributing to the development of new techniques for assessing software reliability are described. One is a software test and evaluation exercise that examined faults within and between two manufacturers' specifications, faults in the code, and inconsistencies between the code and the specifications. The results are given. (author)
Popyrin, L.S.; Nefedov, Yu.V.
The problem of substantiating rational levels and methods of ensuring NPP reliability at the design stage has been studied. It is shown that the optimal level of NPP reliability is determined by coordinating the solution of the problems of reliability optimization for the power industry, heat and power supply, and nuclear power generation systems comprising the NPP with the problems of reliability optimization of the NPP proper as a complex engineering system. The conclusion is made that the greatest attention should be paid to the development of mathematical models of reliability that take into account different methods of equipment redundancy as well as the dependence of failures on various factors, to the improvement of NPP reliability indices, to the development of a database, and to working out a complex of consistent reliability standards. 230 refs.; 2 figs.; 1 tab.
The first recorded usage of the word reliability dates back to the 1800s, albeit referring to a person and not a technical system. Since then, the concept of reliability has become a pervasive attribute worthy of both qualitative and quantitative connotations. In particular, the revolutionary social, cultural and technological changes that have occurred from the 1800s to the 2000s have contributed to the need for a rational framework and quantitative treatment of the reliability of engineered systems and plants. This has led to the rise of reliability engineering as a scientific discipline. In this paper, some considerations are shared with respect to a number of problems and challenges which researchers and practitioners in reliability engineering face when analyzing today's complex systems. The focus is on the contribution of reliability to system safety and on its role within system risk analysis.
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas for linear programming problems are supplied. Next, the methods are applied to a collection of civil engineering reliability problems, which includes a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
Castillo, Enrique [Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. Castros s/n., 39005 Santander (Spain)], E-mail: firstname.lastname@example.org; Minguez, Roberto [Department of Applied Mathematics, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: email@example.com; Castillo, Carmen [Department of Civil Engineering, University of Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: firstname.lastname@example.org
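As a purely illustrative companion to the linear-programming case above, the sketch below checks numerically that the sensitivity of an LP optimum to its right-hand-side data can be recovered by finite differences (the closed formulas in the paper identify these sensitivities with the dual variables). The toy constraint data and the use of scipy.optimize.linprog are assumptions made here for the example, not the paper's formulation.

    # Toy LP (coefficients assumed for illustration): maximize 3x + 5y subject to
    # x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0, written as a minimization.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([-3.0, -5.0])
    A = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [3.0, 2.0]])
    b = np.array([4.0, 12.0, 18.0])

    def optimal_value(b_vec):
        res = linprog(c, A_ub=A, b_ub=b_vec, bounds=[(0, None), (0, None)])
        return res.fun

    base = optimal_value(b)
    eps = 1e-6
    for i in range(len(b)):
        b_pert = b.copy()
        b_pert[i] += eps
        # Finite-difference estimate of d(optimal objective)/d(b_i); for an LP this
        # matches the dual variable of constraint i (away from degenerate points).
        print(f"d(objective)/d(b[{i}]) ~= {(optimal_value(b_pert) - base) / eps:.4f}")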
Nozhnitsky, Yu A.
Requirements for advanced civil aviation engines are discussed. Some significant problems of ensuring the reliability of advanced gas turbine engines are mentioned. Special attention is paid to the successful utilization of new materials and critical technologies. The problem of excluding failure of engine parts due to low-cycle or high-cycle fatigue is also discussed.
Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty may lead to an order-of-magnitude underestimation of the failure probability.
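A minimal sketch of the effect the abstract describes, under an assumed normal resistance model with known standard deviation, a flat prior on the mean, and a deterministic load; none of these modelling choices are taken from the paper. It compares the failure probability from a plug-in point estimate with the Bayesian posterior-predictive value that carries the parameter estimation uncertainty.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sigma = 10.0                 # assumed known standard deviation of the resistance
    n = 10                       # small sample used to estimate the mean resistance
    sample = rng.normal(100.0, sigma, n)
    mu_hat = sample.mean()
    load = 60.0                  # deterministic load; failure occurs when R < load

    # Point-estimate approach: plug in mu_hat and ignore estimation uncertainty.
    pf_point = stats.norm.cdf(load, loc=mu_hat, scale=sigma)

    # Bayesian predictive approach (flat prior on the mean): the predictive
    # distribution of R is normal with the spread inflated to sigma*sqrt(1 + 1/n).
    pf_pred = stats.norm.cdf(load, loc=mu_hat, scale=sigma * np.sqrt(1.0 + 1.0 / n))

    print(f"failure probability, point estimate: {pf_point:.2e}")
    print(f"failure probability, predictive    : {pf_pred:.2e}")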
Rasmussen, J.; Taylor, J.R.
The basis for plant operator reliability evaluation is described. Principles for plant design, necessary to permit reliability evaluation, are outlined. Five approaches to the plant operator reliability problem are described. Case stories, illustrating operator reliability problems, are given. (author)
Popkov, V I; Demirchyan, K S
This general survey deals with approaches to the resolution of such problems as the gathering, analysis and systematization of data on component defects in power equipment and setting up feedback with the manufacturing plants and planning organizations to improve equipment reliability. Such efforts on the part of designers, manufacturers and operating and repair organizations in analyzing faults in 300 MW turbogenerators during 1974-1977 reduced the specific fault rate by 20 to 25% and the downtime per failure by 35 to 40%. Since power equipment should operate for several hundreds of thousands of hours (20 to 30 years) and the majority of power components have guaranteed service lives of no more than 10^5 hours, an extremely difficult problem is the determination of the reliability of equipment past the 10^5-hour point. The present trend in the USSR Unified Power System towards increasing the number of shutdowns and startups, which in the case of turbogenerators of up to 1200 MW power can reach 7500 to 10,000 cycles, is noted. Other areas briefly treated are: MHD generator reliability and economy; nuclear power plant reliability and safety; the reliability of high-power high-voltage thyristor converters; the difficulties involved in scale modeling of power system reliability and the high cost of the requisite full-scale studies; and the poor understanding of long-term corrosion and erosion processes. The review concludes with arguments in favor of greater computerization of all aspects of power system management.
Yalaoui, Alice; Chu, Chengbin; Chatelet, Eric
In order to improve system reliability, designers may introduce different technologies in parallel. When each technology is composed of components in series, the configuration belongs to the class of series-parallel systems. This type of system has not been studied as much as the parallel-series architecture, and there are no methods dedicated to reliability allocation in series-parallel systems with different technologies. We propose in this paper theoretical and practical results for the allocation problem in a series-parallel system. Two resolution approaches are developed. First, a one-stage problem is studied and the results are exploited for the multi-stage problem; a theoretical condition for obtaining the optimal allocation is developed. Since this condition is too restrictive, we then propose an alternative approach based on an approximated function and the results of the one-stage study. This second approach is applied to numerical examples.
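For orientation, a short sketch of how the reliability of a series-parallel system is evaluated: each technology is a series chain whose components must all work, and the system works if at least one parallel branch works. The component reliabilities below are illustrative values, not data from the paper.

    from math import prod

    def series_parallel_reliability(branches):
        """branches: list of parallel branches, each a list of series component reliabilities."""
        branch_rel = [prod(r) for r in branches]          # a branch works if all its components work
        return 1.0 - prod(1.0 - rb for rb in branch_rel)  # the system fails only if every branch fails

    system = [[0.95, 0.90, 0.99],   # technology 1: three components in series
              [0.92, 0.97]]         # technology 2: two components in series
    print(f"system reliability = {series_parallel_reliability(system):.4f}")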
P. Balasubramanie; S. K. Senthil Kumar
Problem statement: The cloud is a purely dynamic environment, and existing task scheduling algorithms are mostly static and consider various parameters such as time, cost, makespan, speed, scalability, throughput, resource utilization, scheduling success rate and so on. Available scheduling algorithms are mostly heuristic in nature, complex and time consuming, and do not consider the reliability and availability of the cloud computing environment. Therefore there is a need to implement a sch...
M. F. Anop
The probability-statistical framework of reliability theory uses models based on the analysis of chance failures. These models are not functional and do not reflect the relation of reliability characteristics to the object's performance. At the same time, a significant part of technical system failures are gradual failures caused by degradation of the internal parameters of the system under the influence of various external factors. The paper shows how to provide the required level of reliability at the design stage using a functional model of a technical object. It describes a method for solving this problem under incomplete initial information, when there is no information about the patterns of technological deviations and parameter degradation and the system model considered is a "black box". To this end, we formulate the problem of optimal parametric synthesis: choosing the nominal values of the system parameters so as to satisfy the requirements for the system's operation while taking into account the unavoidable deviations of the parameters from their design values during operation. As the optimization criterion we propose to use a deterministic geometric criterion, the "reliability reserve", which is the minimum distance measured along the coordinate directions from the nominal parameter values to the boundary of the acceptability region, rather than statistical values. The paper presents the results of applying heuristic swarm intelligence methods to the formulated optimization problem. The efficiency of the particle swarm and bee swarm algorithms is compared with that of an undirected random search algorithm on a number of test optimal parametric synthesis problems in three respects: reliability, convergence rate and running time. The study suggests that the bee swarm method is preferred for ensuring reliability against gradual failures of technical systems because of the greater flexibility of the ...
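A rough sketch of the "reliability reserve" criterion described above: the reserve of a nominal design is the smallest distance, measured along the coordinate axes, from the nominal point to the boundary of the acceptability region, and the nominal values are chosen to maximize it. The two-parameter acceptability region is invented for illustration, and a plain random search stands in for the particle swarm and bee swarm methods compared in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def acceptable(x):
        # Assumed acceptability region for a two-parameter design; a real application
        # would evaluate a functional model of the object instead.
        return bool(x[0] + x[1] <= 1.8 and x[0] * x[1] >= 0.15
                    and np.all(x >= 0.0) and np.all(x <= 1.0))

    def reliability_reserve(x, step=0.005, max_dist=1.0):
        """Smallest distance along the coordinate axes from x to the region boundary."""
        if not acceptable(x):
            return -1.0                      # infeasible nominal point gets a penalty
        best = max_dist
        for i in range(len(x)):
            for direction in (+1.0, -1.0):
                d = 0.0
                while d < best:
                    d += step
                    y = x.copy()
                    y[i] += direction * d
                    if not acceptable(y):
                        best = min(best, d)
                        break
        return best

    # Plain random search over nominal values (the swarm methods would replace this loop).
    best_x, best_reserve = None, -np.inf
    for _ in range(2000):
        cand = rng.uniform(0.0, 1.0, size=2)
        reserve = reliability_reserve(cand)
        if reserve > best_reserve:
            best_x, best_reserve = cand, reserve

    print("nominal parameters:", np.round(best_x, 3), " reliability reserve:", round(best_reserve, 3))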
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
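A compact sketch of the core Jaya move with a linearly time-varying coefficient, applied to a stand-in objective rather than a reliability-redundancy allocation problem. The coefficient schedule and the omission of the TLBO learning phase are simplifying assumptions; the LJaya-TVAC algorithm of the paper is more elaborate.

    import numpy as np

    rng = np.random.default_rng(42)

    def sphere(x):                      # stand-in objective; an RRAP model would go here
        return float(np.sum(x ** 2))

    def jaya_tvac(obj, dim=10, pop=30, iters=500, lo=-5.0, hi=5.0):
        X = rng.uniform(lo, hi, (pop, dim))
        F = np.apply_along_axis(obj, 1, X)
        for t in range(iters):
            c = 2.0 - 1.5 * t / iters          # assumed linear schedule, 2.0 -> 0.5
            best, worst = X[F.argmin()], X[F.argmax()]
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            # Core Jaya move: approach the best solution, move away from the worst.
            X_new = np.clip(X + c * r1 * (best - np.abs(X)) - c * r2 * (worst - np.abs(X)), lo, hi)
            F_new = np.apply_along_axis(obj, 1, X_new)
            improved = F_new < F               # greedy acceptance of improved candidates
            X[improved], F[improved] = X_new[improved], F_new[improved]
        return X[F.argmin()], F.min()

    x_best, f_best = jaya_tvac(sphere)
    print(f"best objective after 500 iterations: {f_best:.3e}")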
Learmonth, Yvonne C.; Paul, Lorna; McFadyen, Angus K.; Mattison, Paul; Miller, Linda
The aim of the study was to establish the test-retest reliability, clinical significance and precision of four mobility and balance measures--the Timed 25-Foot Walk, Six-Minute Walk, Timed Up and Go and the Berg Balance Scale--in individuals moderately affected by multiple sclerosis. Twenty-four participants with multiple sclerosis (Extended…
Amico, P.J.; Hsu, C.J.; Youngblood, R.W.; Fitzpatrick, R.G.
This paper reports that as part of a probabilistic assessment of the safety significance of complex transients at certain PWR power plants, it was necessary to perform a cognitive human reliability analysis. To increase the confidence in the results, it was desirable to make use of actual observations of operator response which were available for the assessment. An approach was developed which incorporated these observations into the human cognitive reliability (HCR) modeling approach. The results obtained provided additional insights over what would have been found using other approaches. These insights were supported by the observations, and it is suggested that this approach be considered for use in future probabilistic safety assessments
Sørensen, John Dalsgaard
are discussed. Limit state equations are presented for fatigue limit states and for ultimate limit states with extreme wind load, and are illustrated by bending failure. Illustrative examples are presented, and as part of the results optimal reliability levels are obtained which correspond to an annual reliability index equal to 3. An example with fatigue failure indicates that the reliability level is almost the same for single wind turbines and for wind turbines in wind farms if the wake effects are modeled equivalently in the design equation and the limit state equation.
Amoebic liver abscess still poses a serious clinical problem in tropical countries. Here we describe three complicated cases to illustrate the magnitude this disease condition could assume in the tropics. Limited access to health facilities as well as poverty and ignorance result in patients presenting late, often with ...
Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara
OBJECTIVES: To test the impact of the method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). STUDY DESIGN AND SETTING: Two nonoverlapping parallel forms each containing eight items from ... questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICCs), and convergent/discriminant validity. RESULTS: In difference score analyses ..., no significant mode differences were found and all confidence intervals were within the prespecified minimal important difference of 0.2 standard deviation. Parallel-forms reliabilities were very high (ICC = 0.85-0.93). Only one across-mode ICC was significantly lower than the same-mode ICC. Tests of validity ...
Chichirova, N. D.; Chichirov, A. A.; Saitov, S. R.
The introduction of baromembrane water treatment technologies for water desalination at Russian thermal power plants began more than 25 years ago. These technologies have demonstrated a definite advantage over traditional make-up water treatment technologies for steam boilers. However, there are problems associated with the reliability and economy of their operation. The first problem is the large volume of waste water (up to 60% of the initial water). The second problem is that an expensive and unique complex of chemical reagents (biocides, antiscalants, washing compositions) is required for stable and trouble-free operation of the units. Each manufacturer develops its own chemical composition for a certain membrane type. This leads to a significant increase in reagent costs and creates dependence of the technology consumer on a certain supplier. The third problem is that the reliability of baromembrane units depends directly on the preliminary treatment of the water. The popular pre-cleaning technology with coagulation by aluminum oxychloride proves to be unacceptable during seasonal changes in the quality of the source water at a number of stations. As a result, pollution, poisoning and damage to the membrane structure or deterioration of its mechanical properties are observed. The report presents ways to solve these problems.
The Problem Solving Inventory (PSI) is designed to measure adults' perceptions of problem-solving ability. The present study aimed to translate it and assess its reliability and validity in a nationwide sample of 3668 Greek educators. In order to evaluate internal consistency reliability, Cronbach's alpha coefficient was used. The scale's construct validity was examined by a confirmatory factor analysis (CFA) and by investigating its correlation with the Internality, Powerful Others and Chance Multidimensional Locus of Control Scale (IPC LOC Scale), the Rosenberg Self-Esteem Scale (RSES) and demographic information. Internal consistency reliability was satisfactory, with Cronbach's alphas ranging from 0.79 to 0.91 for all PSI scales. CFA confirmed that the bi-level model fitted the data well. The root mean square error of approximation (RMSEA), the comparative fit index (CFI) and the goodness of fit index (GFI) values were 0.030, 0.97 and 0.96, respectively, further confirming the bi-level model and the three-factor structure of the PSI. Intercorrelations and correlation coefficients between the PSI, the IPC LOC Scale and the RSES were significant. Age, sex, and working experience differences were found. In conclusion, the Greek version of the PSI was found to have satisfactory psychometric properties and therefore can be used to evaluate Greek teachers' perceptions of their problem-solving skills.
Mazumdar, M.; Marshall, J.A.; Chay, S.C.
The problem of controlling a variable Y such that the probability of its exceeding a specified design limit L is very small is treated. This variable is related to a set of random variables X_i by means of a known function Y = f(X_i). The following approximate methods are considered for estimating the propagation of error in the X_i through the function f(·): linearization; the method of moments; Monte Carlo methods; numerical integration. Response surface and associated design-of-experiments problems as well as statistical inference problems are discussed. (Auth.)
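A brief sketch contrasting two of the approximate methods listed above, linearization and Monte Carlo sampling, for estimating P(Y > L); the response function, parameter values and design limit are made up for the example.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def f(x1, x2):                       # made-up response function Y = f(X1, X2)
        return x1 * np.exp(0.1 * x2)

    mu = np.array([10.0, 5.0])           # means of X1, X2
    sd = np.array([1.0, 2.0])            # standard deviations of X1, X2
    L = 25.0                             # design limit on Y

    # Linearization: first-order propagation of mean and variance through f.
    y0 = f(mu[0], mu[1])
    grad = np.array([np.exp(0.1 * mu[1]), 0.1 * mu[0] * np.exp(0.1 * mu[1])])
    p_lin = stats.norm.sf(L, loc=y0, scale=np.sqrt(np.sum((grad * sd) ** 2)))

    # Monte Carlo: sample the inputs and evaluate f directly.
    x1 = rng.normal(mu[0], sd[0], 200_000)
    x2 = rng.normal(mu[1], sd[1], 200_000)
    p_mc = np.mean(f(x1, x2) > L)

    print(f"P(Y > L): linearization {p_lin:.4f}, Monte Carlo {p_mc:.4f}")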
Nieuwenhuis, S.; Forstmann, B.U.; Wagenmakers, E.-J.
In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests, in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05).
Takaki, Jiro; Taniguchi, Toshiyo; Fujii, Yasuhito
The purpose of this study was to assess the validity and reliability of the Sense of Contribution Scale (SCS), a newly developed, 7-item questionnaire used to measure sense of contribution in the workplace. Workers at 272 organizations answered questionnaires that included the SCS. Because of non-participation or missing data, the number of subjects included in the analyses for internal consistency and validity varied from 1,675 to 2,462 (response rates 54.6%-80.2%). Fifty-four workers were included in the analysis of test-retest reliability (response rate, 77.1%). The SCS showed high internal consistency (Cronbach's α coefficients in men and women were 0.85 and 0.86, respectively) and test-retest reliability (intraclass correlation coefficient = 0.91). Significant (p < 0.001), positive, moderate correlations were found between the SCS score and scores for organization-based self-esteem and work engagement in both genders, supporting the SCS's convergent and discriminant validity. The criterion validity of the SCS was supported by the finding that in both genders, the SCS scores were significantly (p < 0.05) and inversely associated with psychological distress and sleep disturbance in crude and in multivariable analyses that adjusted for demographics, organization-based self-esteem, work engagement, effort-reward ratio, workplace bullying, and procedural and interactional justice. The SCS is a psychometrically satisfactory measure of sense of contribution in the workplace. The SCS provides a new and useful instrument to measure sense of contribution, which is independently associated with mental health in workers, for studies in organizational science, occupational health psychology and occupational medicine.
Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.
This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature
Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: email@example.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: firstname.lastname@example.org; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: email@example.com
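A toy multi-objective view of the redundancy allocation problem discussed above: enumerate candidate redundancy levels, evaluate system reliability and cost, and keep the non-dominated (Pareto-optimal) designs. The subsystem data are invented, and exhaustive enumeration stands in for the NSGA-II search used in the paper.

    from math import prod
    from itertools import product

    # Illustrative data: three serial subsystems, identical redundant units in each.
    component_r = [0.80, 0.90, 0.85]     # single-unit reliabilities
    unit_cost = [2.0, 3.0, 1.5]          # cost of one redundant unit

    def evaluate(allocation):
        reliability = prod(1.0 - (1.0 - r) ** n for r, n in zip(component_r, allocation))
        cost = sum(c * n for c, n in zip(unit_cost, allocation))
        return reliability, cost

    candidates = [(alloc, *evaluate(alloc)) for alloc in product(range(1, 5), repeat=3)]

    # Keep the non-dominated designs: no other design is at least as reliable and as cheap
    # while strictly better in one of the two objectives.
    pareto = [a for a in candidates
              if not any(b[1] >= a[1] and b[2] <= a[2] and (b[1] > a[1] or b[2] < a[2])
                         for b in candidates)]

    for alloc, rel, cost in sorted(pareto, key=lambda t: t[2]):
        print(f"redundancy {alloc}: reliability {rel:.4f}, cost {cost:.1f}")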
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...
Full Text Available The paper presents the problems of the reliability of the supply chain as a whole in the dependence on the reliability of its elements. Different variants of reserving of canals (prime and reserve ones and issues connected with their switching are discussed.
Condon, David; Revelle, William
Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
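As one example of assessing reliability with open-source tools, a minimal computation of Cronbach's alpha from a respondent-by-item score matrix; the data are made up and the snippet is not taken from the software discussed in the work.

    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array-like, rows = respondents, columns = test items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total score
        return k / (k - 1) * (1.0 - item_var / total_var)

    # Made-up responses: 6 respondents answering a 4-item scale.
    data = [[3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 5, 4, 5],
            [1, 2, 1, 2],
            [3, 3, 4, 3],
            [5, 4, 5, 5]]
    print(f"Cronbach's alpha = {cronbach_alpha(data):.3f}")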
Spick, Claudio; Szolar, Dieter H.M.; Preidler, Klaus W.; Tillich, Manfred; Reittner, Pia; Baltzer, Pascal A.
Highlights: • Breast MRI reliably excludes malignancy in conventional BI-RADS 0 cases (NPV: 100%). • Malignancy rate in the BI-RADS 0 population is substantial with 13.5%. • Breast MRI used as a problem-solving tool reliably excludes malignancy. - Abstract: Purpose: To evaluate the diagnostic performance of breast MRI if used as a problem-solving tool in BI-RADS 0 cases. Material and methods: In this IRB-approved, single-center study, 687 women underwent high-resolution 3D, dynamic contrast-enhanced breast magnetic resonance imaging (MRI) between January 2012 and December 2012. Of these, we analyzed 111 consecutive patients (mean age, 51 ± 12 years; range, 20–83 years) categorized as BI-RADS 0. Breast MRI findings were stratified by clinical presentations, conventional imaging findings, and breast density. MRI results were compared to the reference standard, defined as histopathology or an imaging follow-up of at least 1 year. Results: One hundred eleven patients with BI-RADS 0 conventional imaging findings revealed 30 (27%) mammographic masses, 57 (51.4%) mammographic architectural distortions, five (4.5%) mammographic microcalcifications, 17 (15.3%) ultrasound-only findings, and two palpable findings without imaging correlates. There were 15 true-positive, 85 true-negative, 11 false-positive, and zero false-negative breast MRI findings, resulting in a sensitivity, specificity, PPV, and NPV of 100% (15/15), 88.5% (85/96), 57.7% (15/26), and 100% (85/85), respectively. Breast density and reasons for referral had no significant influence on the diagnostic performance of breast MRI (p > 0.05). Conclusion: Breast MRI reliably excludes malignancy in conventional BI-RADS 0 cases, resulting in an NPV of 100% (85/85) and a PPV of 57.7% (15/26).
This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To effectively handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, which is a linear combination of the possibility measure and the necessity measure used to balance pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling
Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify d-MCs for the multi-state flow network reliability problem, based on simple and useful properties established here. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp^2σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also overcomes the weakness of some existing methods, which fail to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for multi-state flow networks. • The proposed method prevents the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms.
Hadi Heidari Gharehbolagh
This study investigates a multi-owner maximum-flow network problem subject to risky events. Uncertain conditions affect proper estimation, and ignoring them may mislead decision makers through overestimation. A key question is how self-governing owners in the network can cooperate with each other to maintain a reliable flow. The question is answered by providing a mathematical programming model based on applying a triangular reliability function in decentralized networks. The proposed method concentrates on multi-owner networks whose arcs have risky time, cost, and capacity parameters. Some cooperative game methods such as the τ-value, the Shapley value, and the core center are presented to fairly distribute the extra profit of cooperation. A numerical example including a sensitivity analysis and the results of comparisons are presented. The proposed method provides more realism in decision-making for risky systems, leading to significant profits in terms of real cost estimation compared with ignoring unforeseen effects.
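A small sketch of how a cooperative-game allocation such as the Shapley value mentioned above distributes the extra profit of cooperation among owners, computed by averaging marginal contributions over all joining orders. The three-owner characteristic function is invented for illustration; the τ-value and core-center allocations are not shown.

    from itertools import permutations

    # Characteristic function of a made-up three-owner game: v(S) is the profit
    # a coalition S can secure on its own.
    v = {(): 0, (1,): 10, (2,): 12, (3,): 8,
         (1, 2): 30, (1, 3): 24, (2, 3): 26, (1, 2, 3): 45}

    def worth(coalition):
        return v[tuple(sorted(coalition))]

    def shapley(players):
        value = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            joined = set()
            for p in order:
                # Marginal contribution of p when joining the owners already present.
                value[p] += worth(joined | {p}) - worth(joined)
                joined.add(p)
        return {p: value[p] / len(orders) for p in players}

    print(shapley([1, 2, 3]))    # the values sum to the grand-coalition profit v(1,2,3)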
Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.
The problems of reliability standardization for computer-aided manufacturing of NPP units are analyzed, considering the following approaches: computer-aided manufacturing of NPP units as part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the composition of reliability indices for computer-aided manufacturing of NPP units under each of the considered approaches is substantiated.
The paper deals with sensitivity and reliability applications in numerical studies of an offshore platform model. Structural parameters and sea conditions refer to the Baltic jack-up drilling platform. The study aims at the influence of particular basic variables on the static and dynamic response as well as the probability of failure due to water wave and wind loads. The paper presents the sensitivity approach to a generalized eigenvalue problem and the evaluation of the performance functions. First-order time-invariant problems of structural reliability analysis are considered.
Huang, X N; Zhang, Y; Feng, W W; Wang, H S; Cao, B; Zhang, B; Yang, Y F; Wang, H M; Zheng, Y; Jin, X M; Jia, M X; Zou, X B; Zhao, C X; Robert, J; Jing, Jin
Objective: To evaluate the reliability and validity of warning signs checklist developed by the National Health and Family Planning Commission of the People's Republic of China (NHFPC), so as to determine the screening effectiveness of warning signs on developmental problems of early childhood. Method: Stratified random sampling method was used to assess the reliability and validity of checklist of warning sign and 2 110 children 0 to 6 years of age(1 513 low-risk subjects and 597 high-risk subjects) were recruited from 11 provinces of China. The reliability evaluation for the warning signs included the test-retest reliability and interrater reliability. With the use of Age and Stage Questionnaire (ASQ) and Gesell Development Diagnosis Scale (GESELL) as the criterion scales, criterion validity was assessed by determining the correlation and consistency between the screening results of warning signs and the criterion scales. Result: In terms of the warning signs, the screening positive rates at different ages ranged from 10.8%(21/141) to 26.2%(51/137). The median (interquartile) testing time for each subject was 1(0.6) minute. Both the test-retest reliability and interrater reliability of warning signs reached 0.7 or above, indicating that the stability was good. In terms of validity assessment, there was remarkable consistency between ASQ and warning signs, with the Kappa value of 0.63. With the use of GESELL as criterion, it was determined that the sensitivity of warning signs in children with suspected developmental delay was 82.2%, and the specificity was 77.7%. The overall Youden index was 0.6. Conclusion: The reliability and validity of warning signs checklist for screening early childhood developmental problems have met the basic requirements of psychological screening scales, with the characteristics of short testing time and easy operation. Thus, this warning signs checklist can be used for screening psychological and behavioral problems of early childhood
Ahmad Ali Eslami
Background: The main purpose of this study was to assess the factorial validity and reliability of the Iranian versions of the personality and behavior system scales (49 items) of the AHDQ (the Adolescent Health and Development Questionnaire) and the interrelations among them based on Jessor's PBT (Problem Behavior Theory). Methods: A multi-staged approach was employed. The cross-cultural adaptation was performed according to the internationally recommended methodology, using the following guidelines: translation, back-translation, revision by a committee, and pretest. After modifying and identifying the best items, a cross-sectional study was conducted to assess the psychometric properties of the Persian version using calibration and validation samples of adolescents. In addition, 113 of them completed it again two weeks later to assess stability. Results: The findings of the exploratory factor analysis suggested that the 7-factor solution with low self-concept, emotional distress, general delinquency, cigarette, hookah, alcohol, and hard drug use provided a better-fitting model. The α range for these identified factors was 0.69 to 0.94, the ICC range was 0.73 to 0.93, and there was a significant difference in mean scores for these instruments between the male normative and detention adolescents. Testing of the first- and second-order measurement models found good fit for the 7-factor model. Conclusions: Factor analyses provided support for the existence of internalizing and externalizing problem behavior syndromes. With those qualifications, this model can be applied in studies among Persian adolescents.
Edgren, Robert; Castrén, Sari; Mäkelä, Marjukka
This review aims to clarify which instruments measuring at-risk and problem gambling (ARPG) among youth are reliable and valid in light of reported estimates of internal consistency, classification accuracy, and psychometric properties. A systematic search was conducted in PubMed, Medline, and PsycInfo covering the years 2009–2015. In total, 50 original research articles fulfilled the inclusion criteria: target age under 29 years, using an instrument designed for youth, and reporting a reliability estimate. Articles were evaluated with the revised Quality Assessment of Diagnostic Accuracy Studies tool. Reliability estimates were reported for five ARPG instruments. Most studies (66%) evaluated the South Oaks Gambling Screen Revised for Adolescents. The Gambling Addictive Behavior Scale for Adolescents was the only novel instrument. In general, the evaluation of instrument reliability was superficial. Despite...
Staat, M [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Sicherheitsforschung und Reaktortechnik]
It is shown that the difficulty for probabilistic fracture mechanics (PFM) is the general problem of the high reliability of a small population. There is no way around the problem as yet. Therefore what PFM can contribute to the reliability of steel pressure boundaries is demonstrated with the example of a typical reactor pressure vessel and critically discussed. Although no method is distinguishable that could give exact failure probabilities, PFM has several additional chances. Upper limits for failure probability may be obtained together with trends for design and operating conditions. Further, PFM can identify the most sensitive parameters, improved control of which would increase reliability. Thus PFM should play a vital role in the analysis of steel pressure boundaries despite all shortcomings. (author). 19 refs, 7 figs, 1 tab.
The paper deals with the problems of using dependability terms, defined in the current standard STN IEC 50 (191): International electrotechnical dictionary, chap. 191: Dependability and quality of service (1993), in technical systems dependability analysis. The goal of the paper is to find a relation between the terms introduced in the mentioned standard and used in technical systems dependability analysis and the rules and practices used in the system analysis of systems theory. A description of the part of the system life cycle related to reliability is used as a starting point. This part of the life cycle is described by a state diagram, and the reliability-relevant terms are assigned to it.
Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong
This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, estimation of MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Kjeldsen, Lene Juel; Birkholm, Trine; Fischer, Hanne Lis
Background: A drug-related problems database (DRP-database) was developed at the request of clinical pharmacists. The information from the DRP-database has only been used locally, e.g. to identify focus areas and to communicate identified DRPs to the hospital wards. Hence the quality of the data...... by clinical pharmacists, with categorization performed by the project group. Reproducibility was explored by re-categorization of a sample of existing records in the DRP-database by two project group members individually. Main outcome measures: Observed proportion of agreement and Fleiss' kappa as measures...... reliability study of 34 clinical pharmacists showed high inter-rater reliability with the project group (Fleiss' kappa = 0.79 with 95% CI (0.70; 0.88)), and the reproducibility study also documented high inter-rater reliability for a sample of 379 records from the DRP-database re-categorized by two project...
И. Ю. Семыкина
Restructuring of the energy sector and liberalization of the electricity market resulted in the separation of a single industry into a multitude of generating companies, federal and interregional distribution utilities, regional power providers and energy suppliers. For a number of reasons related to the process of managing separate companies, faults in the laws and regulations, and the unsatisfactory technical state of the energy equipment of both distributors and consumers, reliability of energy supply faces increasingly negative trends, which can potentially lead to big problems. To this day the energy sector does not have a developed database on the state of equipment and the results of its maintenance, nor has it defined criteria to actually assess the technical condition of the equipment. The authors propose to develop and implement a mechanism aimed at technical auditing and monitoring of the engineering state of the external power supply system, whose results can help in the development of more efficient and economically sound measures to improve reliability of energy supply. In recent years, a lot of attention has been paid to the issues of enhanced security of industrial power supply. However, the specific characteristics of underground coal mining and the enormous workload of the production process limit the applicability of the developed methods and algorithms. Existing research does not address the economic issues of reliable energy supply, either direct (economic damage from power interruptions, contractual security of supply, tariff regulation) or indirect (charges for utility connection with a required level of reliability). There is no explicit definition of the term «autonomous energy source», nor is there a list of power receivers falling into the first and «special» reliability categories. The paper contains a range of urgent problems and solutions that will increase the reliability of external power supply in coal mines.
McIntyre, P.J.; Gibson, I.K.
Reliability data is collected for many reasons on a wide range of components and applications. Sometimes data is collected for a specific purpose whilst in other situations data may be collected simply to provide an available pool of historical data. Data can also be extracted from information that was gathered without recognition that it could be adapted for use as reliability data at a later stage. It is not surprising that there should be significant differences in the strengths and weaknesses of data obtained in such different circumstances. This paper describes work undertaken to investigate how to make best use of available data to provide specific and reliable predictions of valve reliability for nuclear power station applications. (orig.)
Machado, Paulo P P; Fassnacht, Daniel B
The Outcome Questionnaire (OQ-45) is one of the most extensively used standardized self-report instruments to monitor psychotherapy outcomes. The questionnaire is designed specifically for the assessment of change during psychotherapy treatments; therefore, it is crucial to provide norms and clinical cut-off values for clinicians and researchers. The current study provides norms, reliability indices, and clinical cut-off values for the Portuguese version of the scale. Data from two large non-clinical samples (high school/university, N = 1,669; community, N = 879) and one clinical sample (n = 201) were used to investigate psychometric properties and derive normative data for all OQ-45 subscales and the total score. Significant and substantial differences were found for all subscales between the clinical and non-clinical samples. The Portuguese version also showed adequate reliability (internal consistency, test-retest), comparable to the original version. To assess individual clinical change, clinical cut-off values and reliable change indices were calculated, allowing clinicians and researchers to monitor and evaluate clients' individual change. The Portuguese version of the OQ-45 is a reliable instrument with Portuguese norms and cut-off scores comparable to those of the original version, allowing clinicians and researchers to use this instrument for evaluating change and outcome in psychotherapy. This study provides norms for non-clinical and clinical Portuguese samples and investigates the reliability (internal consistency and test-retest) of the OQ-45. Cut-off values and a reliable change index are provided, allowing clinicians to evaluate clinical change and clients' response to treatment and to monitor the quality of mental health care services. These can be used, in routine clinical practice, as benchmarks for treatment progress and to empirically base clinical decisions such as continuation of treatment or considering...
Erford, Bradley T.; Alsamadi, Silvana C.
Score reliability and validity of parent responses concerning their 10- to 17-year-old students were analyzed using the Screening Test for Emotional Problems-Parent Report (STEP-P), which assesses a variety of emotional problems classified under the Individuals with Disabilities Education Improvement Act. Score reliability, convergent, and…
Al'bertinskij, B.I.; Svin'in, M.P.; Tsepakin, S.G.
Statistical data characterizing the reliability of ELECTRON and AVRORA-2 type accelerators are presented. The mean time to failure of the main accelerator units was used as the reliability index. The analysis of accelerator failures allowed a number of conclusions to be drawn. The high failure rate is connected with inadequate training of the servicing personnel and a natural period of equipment adjustment. The mathematical analysis of the failure rate showed that the main responsibility for the insufficiently high reliability rests with the selenium diodes employed in the high-voltage power supply; substitution of selenium diodes by silicon ones increases the time between failures. It is shown that accumulation and processing of operational statistical data will permit more accurate prediction of the reliability of produced high-voltage accelerators, make it possible to cope with the problems of planning optimal, in-time preventive inspections and repairs, and allow the selection of optimal safety factors and test procedures.
Rougé, Charles; Mathias, Jean-Denis; Deffuant, Guillaume
The goal of this paper is twofold: (1) to show that time-variant reliability and a branch of control theory called stochastic viability address similar problems with different points of view, and (2) to demonstrate the relevance of concepts and methods from stochastic viability in reliability problems. On the one hand, reliability aims at evaluating the probability of failure of a system subjected to uncertainty and stochasticity. On the other hand, viability aims at maintaining a controlled dynamical system within a survival set. When the dynamical system is stochastic, this work shows that a viability problem belongs to a specific class of design and maintenance problems in time-variant reliability. Dynamic programming, which is used for solving Markovian stochastic viability problems, then yields the set of design states for which there exists a maintenance strategy which guarantees reliability with a confidence level β for a given period of time T. Besides, it leads to a straightforward computation of the date of the first outcrossing, informing on when the system is most likely to fail. We illustrate this approach with a simple example of population dynamics, including a case where load increases with time. - Highlights: • Time-variant reliability tools cannot devise complex maintenance strategies. • Stochastic viability is a control theory that computes a probability of failure. • Some design and maintenance problems are stochastic viability problems. • Used in viability, dynamic programming can find reliable maintenance actions. • Confronting reliability and control theories such as viability is promising
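A minimal sketch of the dynamic-programming idea described above: on a discretized population model with an assumed logistic growth law, discrete shocks, and a few candidate harvest actions (all invented for the example), backward recursion yields the maximal probability of remaining in the survival set over a horizon T, which is the viability counterpart of a time-variant reliability.

    import numpy as np

    grid = np.linspace(1.0, 10.0, 91)      # state grid; the survival set is [1, 10]
    actions = [0.0, 0.5, 1.0]              # candidate harvest (maintenance) decisions
    noise = np.array([-0.6, 0.0, 0.6])     # discrete shock, each value with probability 1/3
    T = 20                                 # time horizon

    def step(x, a, w):
        # Assumed logistic growth with harvesting and an additive shock.
        return x + 0.4 * x * (1.0 - x / 10.0) - a + w

    # value[i] = best achievable P(stay in [1, 10] for the remaining steps | state grid[i])
    value = np.ones_like(grid)
    for _ in range(T):
        new_value = np.zeros_like(grid)
        for i, x in enumerate(grid):
            best = 0.0
            for a in actions:
                nxt = step(x, a, noise)
                inside = (nxt >= grid[0]) & (nxt <= grid[-1])
                # Interpolate the continuation value; leaving the survival set contributes 0.
                cont = np.where(inside, np.interp(nxt, grid, value), 0.0)
                best = max(best, cont.mean())
            new_value[i] = best
        value = new_value

    print("max P(survive 20 steps | x0 = 5.0) =", round(float(np.interp(5.0, grid, value)), 3))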
Fujiwara, Takeo; Yagi, Junko; Homma, Hiroaki; Mashiko, Hirobumi; Nagao, Keizo; Okuyama, Makiko
Background On March 11, 2011, a massive undersea earthquake and tsunami struck East Japan. Few studies have investigated the impact of exposure to a natural disaster on preschool children. We investigated the association of trauma experiences during the Great East Japan Earthquake on clinically significant behavior problems among preschool children 2 years after the earthquake. Method Participants were children who were exposed to the 2011 disaster at preschool age (affected area, n = 178; unaffected area, n = 82). Data were collected from September 2012 to June 2013 (around 2 years after the earthquake), thus participants were aged 5 to 8 years when assessed. Severe trauma exposures related to the earthquake (e.g., loss of family members) were assessed by interview, and trauma events in the physical environment related to the earthquake (e.g. housing damage), and other trauma exposure before the earthquake, were assessed by questionnaire. Behavior problems were assessed by caregivers using the Child Behavior Checklist (CBCL), which encompasses internalizing, externalizing, and total problems. Children who exceeded clinical cut-off of the CBCL were defined as having clinically significant behavior problems. Results Rates of internalizing, externalizing, and total problems in the affected area were 27.7%, 21.2%, and 25.9%, respectively. The rate ratio suggests that children who lost distant relatives or friends were 2.36 times more likely to have internalizing behavior problems (47.6% vs. 20.2%, 95% CI: 1.10–5.07). Other trauma experiences before the earthquake also showed significant positive association with internalizing, externalizing, and total behavior problems, which were not observed in the unaffected area. Conclusions One in four children still had behavior problems even 2 years after the Great East Japan Earthquake. Children who had other trauma experiences before the earthquake were more likely to have behavior problems. These data will be
Salonen, Anne H; Castrén, Sari; Alho, Hannu; Lahti, Tuuli
Problem gambling not only impacts those directly involved, but also the concerned significant others (CSOs) of problem gamblers. The aims of this study were to investigate the proportion of male and female CSOs at the population level; to investigate who the CSOs were concerned about; and to investigate sociodemographic factors, gender differences, gambling behaviour, and health and well-being among CSOs and non-CSOs. The data (n = 4484) were based on a cross-sectional population study. Structured telephone interviews were conducted in 2011-2012. The data were weighted based on age, gender and residency. The respondents were defined as CSOs if they reported that at least one of their significant others (father, mother, sister/brother, grandparent, spouse, own child/children, close friend) had had gambling problems. Statistical significance was determined by chi-squared and Fisher's exact tests, and logistic regression analysis. Altogether, 19.3% of the respondents were identified as CSOs. Most commonly, the problem gambler was a close friend (12.4%) of the CSO. The percentage of close friends having a gambling problem was larger among male CSOs (14.4%) compared with female CSOs (10.3%; p ≤ 0.001), while the percentage of partners with gambling problem was larger among females (2.6%) than among males (0.8%; p ≤ 0.001). In the best fitting model, the odds ratio (95% CI) of being a male CSO was 2.03 (1.24-3.31) for past-year gambling problems, 1.46 (1.08-1.97) for loneliness and 1.78 (1.38-2.29) for risky alcohol consumption. The odds ratio (95% CI) of being a female CSO was 1.51 (1.09-2.08) for past-year gambling involvement, 3.05 (1.18-7.90) for past-year gambling problems, 2.21 (1.24-3.93) for mental health problems, 1.39 (1.03-1.89) for loneliness and 1.97 (1.43-2.71) for daily smoking. CSOs of problem gamblers often experience cumulating problems such as their own risky gambling behaviour, health problems and other addictive disorders. The
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior defines two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.
Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers' route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers' perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers' confidence and network congestion levels. The analysis result could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of road toll; (2) the effect of road toll on the actual travel time distribution and its reliability measures; (3) the effect of road toll on the total network travel time distribution and its statistics; and (4) the effect of travel demand level and the value of reliability (VoR) level on the components of road toll.
Taboada, Heidi A.; Baheranwala, Fatema; Coit, David W.; Wattanapongsakorn, Naruemon
For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. The second approach uses data mining clustering techniques, namely the k-means algorithm, to group the Pareto set into clusters of similar solutions; this provides the decision maker with just k general solutions to choose from. From the clustered Pareto optimal set, solutions are then sought that are likely to be more relevant to the decision maker: solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem was solved as a multiple-objective problem, using the NSGA genetic algorithm to initially find the Pareto optimal solutions; the two proposed methods were then applied to prune the Pareto set
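A minimal sketch of the clustering step described above, assuming scikit-learn is available; the number of clusters, the normalisation and the toy data are illustrative assumptions rather than settings from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_pareto_set(objectives, k=5, random_state=0):
    """Group a Pareto-optimal set into k clusters and return one representative
    solution (the one closest to its cluster centroid) per cluster.
    'objectives' is an (n_solutions x n_objectives) array."""
    obj = np.asarray(objectives, dtype=float)
    # normalise each objective to [0, 1] so no objective dominates the distance
    lo = obj.min(axis=0)
    obj_n = (obj - lo) / (obj.max(axis=0) - lo + 1e-12)
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(obj_n)
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(obj_n[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(d)]))
    return sorted(reps)

# toy usage: 50 hypothetical Pareto points with 2 objectives (reliability, cost)
rng = np.random.default_rng(1)
pareto = np.column_stack([rng.uniform(0.8, 1.0, 50), rng.uniform(10, 100, 50)])
print(prune_pareto_set(pareto, k=4))
```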
In the U.S.A. over the past few months, widespread plant shutdowns because of cracking problems has produced considerable public pressure for a reappraisal of the reliability and safety of nuclear reactors. The awareness of such problems, and their solution, is particularly relevant to South Africa at this time. Some materials problems related to nuclear plant failure are examined in this paper. Since catastrophic failure (without prior warning from slow leakage) is in principle possible for light water (pressurised) reactors under operating conditions, it is essential to maintain rigorous manufacturing and quality control procedures, in conjunction with thorough and frequent examination by non-destructive testing methods. Although tests currently in progress in the U.S.A. on large-scale model reactors suggest that mathematical stress and failure analyses, for simple geometries at least, are sound, current in situ surveillance programmes aimed at categorizing the effects of irradiation are inadequate. In addition, the effects on materials properties and subsequent fracture resistance of the combined effects of irradiation and thermal shock (arising from the injection of emergency cooling water during a loss-of coolant accident) are unknown. The problem of stress corrosion cracking in stainless steel pipelines is considerable, and at present virtually impossible to predict. Much of the available laboratory data is inapplicable in that it cannot account for the complex interactions of stress state, temperature, material variations and segregation effects, and water chemistry, especially in conjunction with irradiation effects, that are experienced in an operating environment
Work on the project was divided into three tasks. In Task 1, past surveys of LWR piping system problems and recent Licensee Event Report summaries are studied to identify the significant problems of LWR piping systems and the primary causes of these problems. Pipe cracking is identified as the most recurring problem and is mainly due to the vibration of pipes due to operating pump-pipe resonance, fluid-flow fluctuations, and vibration of pipe supports. Research relevant to the identified piping system problems is evaluated. Task 2 studies identify typical LWR piping systems and the current loads and load combinations used in the design of these systems. Definitions of loads are reviewed. In Task 3, a comparative study is carried out on the use of nonlinear analysis methods in the design of LWR piping systems. The study concludes that the current linear-elastic methods of analysis may not predict accurately the behavior of piping systems under seismic loads and may, under certain circumstances, result in nonconservative designs. Gaps at piping supports are found to have a significant effect on the response of the piping systems
This paper proposes a new swarm intelligence method known as the Particle-based Simplified Swarm Optimization (PSSO) algorithm while undertaking a modification of the Updating Mechanism (UM), called N-UM and R-UM, and simultaneously applying an Orthogonal Array Test (OA) to solve reliability–redundancy allocation problems (RRAPs) successfully. One difficulty of RRAP is the need to maximize system reliability in cases where the number of redundant components and the reliability of corresponding components in each subsystem are simultaneously decided with nonlinear constraints. In this paper, four RRAP benchmarks are used to display the applicability of the proposed PSSO that advances the strengths of both PSO and SSO to enable optimizing the RRAP that belongs to mixed-integer nonlinear programming. When the computational results are compared with those of previously developed algorithms in existing literature, the findings indicate that the proposed PSSO is highly competitive and performs well. - Highlights: • This paper proposes a particle-based simplified swarm optimization algorithm (PSSO) to optimize RRAP. • Furthermore, the UM and an OA are adapted to advance in optimizing RRAP. • Four systems are introduced and the results demonstrate the PSSO performs particularly well
There is a growing interest in finding a global optimal path in transportation networks, particularly when the network suffers from unexpected disturbance. This paper studies the problem of finding a global optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which the travel time is stochastic instead of deterministic. Traditional path finding methods based on least expected travel time cannot capture the network user's risk-taking behavior. To overcome this limitation, reliable path finding algorithms have been proposed, but the convergence to a global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into a backtracking method to propose a new path finding algorithm under uncertainty. The global optimum of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
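The sketch below is not the K-shortest-path/backtracking algorithm of the paper; it only illustrates, under assumed independent normal link travel times, the on-time-arrival criterion by brute-force enumeration of simple paths in a tiny hypothetical network.

```python
import math

def simple_paths(graph, src, dst, path=None):
    """Enumerate all simple paths in a small directed graph given as
    {node: {neighbour: (mean_time, var_time)}}."""
    path = [src] if path is None else path
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path + [nxt])

def on_time_probability(graph, path, budget):
    """P(total travel time <= budget), assuming independent normal link times."""
    mean = sum(graph[a][b][0] for a, b in zip(path, path[1:]))
    var = sum(graph[a][b][1] for a, b in zip(path, path[1:]))
    return 0.5 * (1 + math.erf((budget - mean) / math.sqrt(2 * var)))

def most_reliable_path(graph, src, dst, budget):
    return max(simple_paths(graph, src, dst),
               key=lambda p: on_time_probability(graph, p, budget))

# toy network (hypothetical): edge -> (mean minutes, variance)
g = {'A': {'B': (10, 4), 'C': (12, 1)},
     'B': {'D': (12, 9)},
     'C': {'D': (11, 1)},
     'D': {}}
print(most_reliable_path(g, 'A', 'D', budget=25))
```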
Diamantidis A. C.
In this study, the buffer allocation problem (BAP) in homogeneous, asymptotically reliable serial production lines is considered. A known aggregation method, given by Lim, Meerkov, and Top (1990), for the performance evaluation (i.e., estimation of throughput) of this type of production line when the buffer allocation is known, is used as an evaluative method in conjunction with a newly developed dynamic programming (DP) algorithm for the BAP. The proposed algorithm is applied to production lines where the number of machines varies from four up to a hundred machines. The proposed algorithm is fast because it reduces the volume of computations by rejecting allocations that do not lead to maximization of the line's throughput. Numerical results are also given for large production lines.
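To make the generative-versus-evaluative split above concrete, here is a hedged sketch that enumerates buffer allocations and scores each with a placeholder throughput function; the aggregation method of Lim, Meerkov, and Top and the paper's DP pruning are not reproduced, and the surrogate model and its parameters are assumptions.

```python
from itertools import combinations

def allocations(total_slots, n_buffers):
    """All ways to split 'total_slots' identical buffer slots among
    'n_buffers' inter-machine buffers (stars and bars)."""
    for dividers in combinations(range(total_slots + n_buffers - 1), n_buffers - 1):
        alloc, prev = [], -1
        for d in dividers:
            alloc.append(d - prev - 1)
            prev = d
        alloc.append(total_slots + n_buffers - 2 - prev)
        yield tuple(alloc)

def throughput(alloc, p=0.9):
    """Placeholder evaluative model (NOT the aggregation method of the paper):
    a crude surrogate in which each buffer improves throughput with
    diminishing returns."""
    tp = p
    for b in alloc:
        tp *= 1 - (1 - p) / (b + 1)
    return tp

def best_allocation(total_slots, n_buffers):
    """Exhaustive stand-in for the DP search: keep the allocation with the
    highest surrogate throughput."""
    return max(allocations(total_slots, n_buffers), key=throughput)

# toy usage: 6 buffer slots among the 3 buffers of a 4-machine line
print(best_allocation(6, 3))
```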
BACKGROUND: The KIPPPI (Brief Instrument Psychological and Pedagogical Problem Inventory) is a Dutch questionnaire that measures psychosocial and pedagogical problems in 2-year olds and consists of a KIPPPI Total score, Wellbeing scale, Competence scale, and Autonomy scale. This study examined the reliability, validity, screening accuracy and clinical application of the KIPPPI. METHODS: Parents of 5959 2-year-old children in the Rotterdam area, the Netherlands, were invited to participate in the study. Parents of 3164 children (53.1% of all invited parents) completed the questionnaire. The internal consistency was evaluated and, in subsamples, the test-retest reliability and concurrent validity with regard to the Child Behavioral Checklist (CBCL). Discriminative validity was evaluated by comparing scores of parents who worried about their child's upbringing and parents that did not. Screening accuracy of the KIPPPI was evaluated against the CBCL by calculating the Receiver Operating Characteristic (ROC) curves. The clinical application was evaluated by the relation between KIPPPI scores and the clinical decision made by the child health professionals. RESULTS: Psychometric properties of the KIPPPI Total score, Wellbeing scale, Competence scale and Autonomy scale were respectively: Cronbach's alphas: 0.88, 0.86, 0.83, 0.58. Test-retest correlations: 0.80, 0.76, 0.73, 0.60. Concurrent validity was as hypothesised. The KIPPPI was able to discriminate between parents that worried about their child and parents that did not. Screening accuracy was high (>0.90) for the KIPPPI Total score and for the Wellbeing scale. The KIPPPI scale scores and clinical decision of the child health professional were related (p<0.05), indicating a good clinical application. CONCLUSION: The results in this large-scale study of a diverse general population sample support the reliability, validity and clinical application of the KIPPPI Total score, Wellbeing scale and Competence
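For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of Cronbach's alpha follows; only the standard textbook formula is used, and the item scores are hypothetical (this is not the KIPPPI data or analysis).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy usage: 6 respondents answering a hypothetical 4-item scale (0-4 scores)
scale = np.array([[3, 4, 3, 4],
                  [2, 2, 1, 2],
                  [4, 4, 4, 3],
                  [1, 0, 1, 1],
                  [3, 3, 2, 3],
                  [2, 1, 2, 2]])
print(round(cronbach_alpha(scale), 3))
```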
Authors such as Tony Hope and Julian Savulescu appeal to Derek Parfit's non-identity problem in relation to particular questions in applied ethics, and particularly in reproductive ethics. They argue that the non-identity problem shows that an individual cannot be harmed by being brought into existence, and therefore, we cannot say that the individual is harmed if, for example, we select an embryo in order to have a deaf child. Thus, they argue that an appeal to the non-identity problem blocks (or significantly reduces the force of) objections in a number of cases. I argue that these discussions often give the impression that this is a clear conclusion, shared by most philosophers, and largely beyond dispute. This is particularly significant because these discussions are often in journals or books with an interdisciplinary readership. My concern is that they give the impression of stating: 'philosophers have studied this issue, and this is the conclusion they have reached. Now I will emphasise the implications for medical ethics'. I argue that, far from being the consensus view, the view presented by Hope and Savulescu is rejected by many, including Parfit himself. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Facing the new demands of the optical fiber communications market, the performance and reliability of an optical network system depend almost entirely on the qualification of its fiber optics components. To comply with system requirements, Telcordia/Bellcore reliability and high-power testing has therefore become a key issue for fiber optics component manufacturers. Qualification under Telcordia/Bellcore reliability or high-power testing is crucial for manufacturers, since it determines who stands out in an intensely competitive market, and the testing itself requires maintenance and optimization. Work on reliability and high-power testing has thus become a new market demand, and a way is needed to reach the 'triple-win' goal expected by component makers, reliability testers and system users. For those facing practical problems with such testing, the following seven topics address how to avoid common mistakes and perform qualified reliability and high-power testing: qualification maintenance requirements for reliability testing; lot control in preparing for reliability testing; sample selection for reliability testing; interim measurements during reliability testing; basic reference factors relating to high-power testing; the necessity of re-qualification testing when production changes; and understanding product-family similarity through the definitions
Osório, G.J.; Lujano-Rojas, J.M.; Matias, J.C.O.; Catalão, J.P.S.
Highlights: • A model for the scheduling of power systems with significant renewable power generation is provided. • A new methodology that takes information from the analysis of each scenario separately is proposed. • Based on a probabilistic analysis, unit scheduling and the corresponding economic dispatch are estimated. • A comparison with other methodologies is in favour of the proposed approach. - Abstract: Optimal operation of power systems with high integration of renewable power sources has become difficult as a consequence of the random nature of some sources like wind energy and photovoltaic energy. Nowadays, this problem is solved using the Monte Carlo Simulation (MCS) approach, which allows considering important statistical characteristics of wind and solar power production such as the correlation between consecutive observations, the diurnal profile of the forecasted power production, and the forecasting error. However, the MCS method requires the analysis of a representative number of trials, which is an intensive calculation task that increases considerably with the number of scenarios considered. In this paper, a model for the scheduling of power systems with significant renewable power generation, based on a scenario generation/reduction method that establishes a proportional relationship between the number of scenarios and the computational time required to analyse them, is proposed. The methodology takes information from the analysis of each scenario separately to determine the probabilistic behaviour of each generator at each hour of the scheduling problem. Then, considering a given significance level, the units to be committed are selected and the load dispatch is determined. The proposed technique was illustrated through a case study and compared with a stochastic programming approach, concluding that the proposed methodology can provide an acceptable solution in a reduced computational time
Using criminal law powers to respond to people living with HIV (PHAs) who expose sexual partners to HIV or transmit the virus to them is a prominent global HIV public policy issue. While there are widespread concerns about the public health impact of HIV-related criminalization, the social science literature on the topic is limited. This article responds to that gap in knowledge by reporting on the results of qualitative research conducted with service providers and PHAs in Canada. The article draws on a studies in the social organization of knowledge perspective and insights from critical criminology and work on the "medico-legal borderland." It investigates the role played by the legal concept of "significant risk" in coordinating criminal law governance and its interface with public health and HIV prevention. In doing so, the article emphasizes that exploring the public health impact of criminalization must move past the criminal law--PHA dyad to address broader social and institutional processes relevant to HIV prevention. Drawing on individual and focus group interviews, this article explores how criminal law governance shapes the activities of providers engaged in HIV prevention counseling, conceptualized as a complex of activities linking clinicians, public health officials, front-line counselors, PHAs, and others. It emphasizes three key findings: (1) the concept of significant risk poses serious problems to risk communication in HIV counseling and contributes to contradictory advice about disclosure obligations; (2) criminalization discourages PHAs' openness about HIV non-disclosure in counseling relationships; and (3) the recontextualization of public health interpretations of significant risk in criminal proceedings can intensify criminalization. Copyright © 2011 Elsevier Ltd. All rights reserved.
Erford, Bradley T.; Butler, Caitlin; Peacock, Elizabeth
The Screening Test for Emotional Problems-Teacher Version (STEP-T) was designed to identify students aged 7-17 years with wide-ranging emotional disturbances. Coefficients alpha and test-retest reliability were adequate for all subscales except Anxiety. The hypothesized five-factor model fit the data very well and external aspects of validity were…
Eshoj, Henrik; Ingwersen, Kim Gordon; Larsen, Camilla Marie; Kjaer, Birgitte Hougs; Juul-Kristensen, Birgit
First, to investigate the intertester reliability of clinical shoulder instability and laxity tests, and second, to describe the mutual dependency of each test evaluated by each tester for identifying self-reported shoulder instability and laxity. A standardised protocol for conducting reliability studies was used to test the intertester reliability of the six clinical shoulder instability and laxity tests: apprehension, relocation, surprise, load-and-shift, sulcus sign and Gagey. Cohen's kappa (κ) with 95% CIs besides prevalence-adjusted and bias-adjusted kappa (PABAK), accounting for insufficient prevalence and bias, were computed to establish the intertester reliability and mutual dependency. Forty individuals (13 with self-reported shoulder instability and laxity-related shoulder problems and 27 normal shoulder individuals) aged 18-60 were included. Fair (relocation), moderate (load-and-shift, sulcus sign) and substantial (apprehension, surprise, Gagey) intertester reliability were observed across tests (κ 0.39-0.73; 95% CI 0.00 to 1.00). PABAK improved reliability across tests, resulting in substantial to almost perfect intertester reliability for the apprehension, surprise, load-and-shift and Gagey tests (κ 0.65-0.90). Mutual dependencies between each test and self-reported shoulder problem showed apprehension, relocation and surprise to be the most often used tests to characterise self-reported shoulder instability and laxity conditions. Four tests (apprehension, surprise, load-and-shift and Gagey) out of six were considered intertester reliable for clinical use, while relocation and sulcus sign tests need further standardisation before acceptable evidence. Furthermore, the validity of the tests for shoulder instability and laxity needs to be studied. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
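As a hedged aside on the agreement statistics used above, the sketch below computes Cohen's kappa and PABAK for two raters' binary test outcomes; the formulas are the standard ones, and the toy ratings are invented for illustration, not taken from the study.

```python
def kappa_and_pabak(ratings_a, ratings_b):
    """Cohen's kappa and prevalence-and-bias-adjusted kappa (PABAK) for two
    raters giving binary positive/negative test outcomes (1 = positive)."""
    pairs = list(zip(ratings_a, ratings_b))
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n              # observed agreement
    pa_pos = sum(a for a, _ in pairs) / n               # rater A positive rate
    pb_pos = sum(b for _, b in pairs) / n               # rater B positive rate
    pe = pa_pos * pb_pos + (1 - pa_pos) * (1 - pb_pos)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    pabak = 2 * po - 1                                  # adjusts for prevalence and bias
    return kappa, pabak

# toy usage: two testers scoring a shoulder test on 10 hypothetical participants
a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(kappa_and_pabak(a, b))
```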
Todorović Miodrag; Radovanović Olica
Artificial abortion is a very important socio-medical, economic and demographic problem. It is not only a problem of public health (disease, disability, sterility) and of the social economy (lost income and compensation due to absenteeism, and increased health-care expenditure for the treatment of direct, early and late consequences and of sterility). It is also a very important demographic problem because of the increase in "unrealized fertility" and the loss of descendants. According to the regis...
Pac, A; Oruba, Z; Olszewska-Czyż, I; Chomyszyn-Gajewska, M
The individual evaluation of patients' motivation should be introduced to the protocol of periodontal treatment, as it could impact positively on effective treatment planning and treatment outcomes. However, a standardised tool measuring the extent of periodontal patients' motivation has not yet been proposed in the literature. Thus, the objective of the present study was to determine the validity and reliability of the Zychlińscy motivation scale adjusted to the needs of periodontology. Cross sectional study. Department of Periodontology and Oral Medicine, Dental University Clinic, Jagiellonian University, Krakow, Poland. 199 adult periodontal patients, aged 20-78. 14-item questionnaire. The items were adopted from the original Zychlińscy motivation assessment scale. Validity and reliability of the proposed motivation assessment instrument. The assessed Cronbach's alpha of 0.79 indicates the scale is a reliable tool. Principal component analysis revealed a model with three factors, which explained half of the total variance. Those factors represented: the patient's attitude towards treatment and oral hygiene practice; previous experiences during treatment; and the influence of external conditions on the patient's attitude towards treatment. The proposed scale proved to be a reliable and accurate tool for the evaluation of periodontal patients' motivation.
Shima MohammadZadeh Dogahe
A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and the reliability-centered maintenance (RCM) problem simultaneously. A system of both repairable and nonrepairable components has been considered. In this system, electronic components are nonrepairable while mechanical components are mostly repairable. For nonrepairable components, a redundancy allocation problem is dealt with to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies have been taken into account for electronic components. Also, the net present value of the secondary cost, including operational and maintenance costs, has been calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: the Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and the Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using the mentioned algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.
After a natural disaster, especially a large-scale disaster with wide affected areas, vast amounts of relief materials are often needed. In the meantime, the traffic network is subject to uncertainty because of the disaster. In this paper, we assume that the edges in the network are either connected or blocked, and that the connection probability of each edge is known. In order to ensure the arrival of these supplies at the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. Then, an adjustable reliable path selection model is proposed for the case where an edge of the selected reliable path is broken. The corresponding algorithms are shown to be efficient both theoretically and numerically.
M.E. Che Munaaim; M.S. Mohd Danuri; H. Abdul-Rahman
Some developed countries have drawn up construction-specific statutory security of payment acts/legislation, typically known as Construction Contracts Acts, to eliminate poor payment practices and to support continuous, uninterrupted construction works. Malaysia too cannot pretend not to have these problems. This paper presents findings of a study conducted amongst Malaysian contractors with the aims to determine the seriousness of late and non-payment problems; to identify the main causes and ...
Atik Altınok, Yasemin; Özgür, Suriye; Meseri, Reci; Özen, Samim; Darcan, Şükran; Gökşen, Damla
The aim of this study was to show the reliability and validity of a Turkish version of the Diabetes Eating Problem Survey-Revised (DEPS-R) in children and adolescents with type 1 diabetes mellitus. A total of 200 children and adolescents with type 1 diabetes, ages 9-18 years, completed the DEPS-R Turkish version. In addition to tests of validity, confirmatory factor analysis was conducted to investigate the factor structure of the 16-item Turkish version of DEPS-R. The Turkish version of DEPS-R demonstrated satisfactory Cronbach's α (0.847) and was significantly correlated with age (r=0.194; p1), hemoglobin A1c levels (r=0.303; p1), and body mass index-standard deviation score (r=0.412; p1), indicating criterion validity. Median DEPS-R scores of the Turkish version for the total sample, females, and males were 11.0, 11.5, and 10.5, respectively. Disturbed eating behaviors and insulin restriction were associated with poor metabolic control. A short, self-administered diabetes-specific screening tool for disordered eating behavior can be used routinely in the clinical care of adolescents with type 1 diabetes. The Turkish version of DEPS-R is a valid screening tool for disordered eating behaviors in type 1 diabetes, and it is potentially important for the early detection of disordered eating behaviors.
Sobral, Maria P.; Costa, Maria E.; Schmidt, Lone
STUDY QUESTION Are the Copenhagen Multi-Centre Psychosocial Infertility research program Fertility Problem Stress Scales (COMPI-FPSS) a reliable and valid measure across gender and culture? SUMMARY ANSWER The COMPI-FPSS is a valid and reliable measure, presenting excellent or good fit ... comparability of fertility-related stress across genders and countries. STUDY DESIGN, SIZE, DURATION Cross-sectional study. First, we tested the structure of the COMPI-FPSS. Then, reliability and validity (convergent and discriminant) were examined for the final model. Finally, measurement invariance both across genders and cultures was tested. PARTICIPANTS/MATERIALS, SETTING, METHODS Our final sample had 3923 fertility patients (1691 men and 2232 women) recruited in clinical settings from seven different countries: Denmark, China, Croatia, Germany, Greece, Hungary and Sweden. Participants had a mean age of 34 ...
Homke, P.; Kutsch, W.; Lindauer, E.
This paper gives a survey on reliability data assessment in the FRG. The activities which were carried out for the German Risk Assessment Study are presented together with selected results. A systematic data collection in a nuclear power plant is described, and the experiences gained in this project are discussed
M.E. Che Munaaim
Some developed countries have drawn up construction-specific statutory security of payment acts/legislation, typically known as Construction Contracts Acts, to eliminate poor payment practices and to support continuous, uninterrupted construction works. Malaysia too cannot pretend not to have these problems. This paper presents findings of a study conducted amongst Malaysian contractors with the aims to determine the seriousness of late and non-payment problems; to identify the main causes and effects of late and non-payment; and to identify ways to sustain the payment flows in the Malaysian construction industry. The study focused on contractual payments from the paymaster (government or private) to the contractors. The main factors for late and non-payment in the construction industry identified from the study include: delay in certification, the paymaster's poor financial management, local culture/attitude, the paymaster's failure to implement good governance in business, underpayment of certified amounts by the paymaster, and the use of 'pay when paid' clauses in contracts. The research findings show that late and non-payment can create cash flow problems, stress and financial hardship for the contractors. Amongst the most appropriate solutions to overcome the problem of late and non-payment faced by local contractors are: a right to regular periodic payment, a right to a defined time frame for payment, and a right to a speedy dispute resolution mechanism. Promptness in submitting, processing, issuing and honouring interim payment certificates is an extremely important issue in relation to progress payment claims. Perhaps an increased sense of professionalism in the construction industry could overcome some of the problems related to late and non-payment issues.
Back, Ki-Joon; Williams, Robert J; Lee, Choong-Ki
Most research on the assessment, epidemiology, and treatment of problem gambling has occurred in Western jurisdictions. This potentially limits the cross-cultural validity of problem gambling assessment instruments as well as etiological models of problem gambling. The primary objective of the present research was to investigate the reliability and validity of three problem gambling assessment instruments within a South Korean context. A total of 4,330 South Korean adults participated in a comprehensive assessment of their gambling behavior that included the administration of the DSM-IV criteria for pathological gambling (NODS), the Canadian Problem Gambling Index (CPGI), and the Problem and Pathological Gambling Measure (PPGM). Cronbach alpha showed that all three instruments had good internal consistency. Concurrent validity was established by the significant associations observed between scores on the instruments and measures of gambling involvement (number of gambling formats engaged in; frequency of gambling; and gambling expenditure). Most importantly, kappa statistics showed that all instruments have satisfactory classification accuracy against clinical assessment of problem gambling conducted by South Korean clinicians (NODS κ = .66; PPGM κ = .62; CPGI κ = .51). These results confirm that Western-derived operationalizations of problem gambling have applicability in a South Korean setting.
I. Kruizinga (Ingrid); W. Jansen (Wilma); C.L. de Haan (Carolien); H. Raat (Hein)
Background: The KIPPPI (Brief Instrument Psychological and Pedagogical Problem Inventory) is a Dutch questionnaire that measures psychosocial and pedagogical problems in 2-year olds and consists of a KIPPPI Total score, Wellbeing scale, Competence scale, and Autonomy scale. This study
Salonen, Anne H; Alho, Hannu; Castrén, Sari
This study compares past-year gambling frequency, gambling problems and concerned significant others (CSOs) of problem gamblers in Finland by age, from 2007 and 2011. We used random sample data collected in 2007 (n = 4722) and 2011 (n = 4484). The data were weighted, based on gender, age and region of residence. We measured the past-year gambling frequency using a categorical variable, while gambling severity was measured with the South Oaks Gambling Screen. We identified CSOs by a single question including seven response options. Chi-Squared and Fisher's exact tests were used. Overall, the past-year gambling frequency change was statistically significant between 2007 and 2011. Among 18-64-year-old Finnish people, the proportion of non-gamblers decreased. Yet, among 15-17-year-old respondents, non-gambling increased and gambling problems decreased. Among 18-24 year olds, the proportion of close ones with gambling problems also decreased. On the other hand, the proportion of family members with gambling problems increased among the 50-64 year olds. The increase in adult gambling participation was mainly explained by infrequent gambling. The proportion of gambling problems from the gamblers' and CSOs' perspective remained unchanged, yet significant changes were observed within age groups. The short-term changes in under-age gambling problems were desirable. Future studies should explore the adaptation and access hypotheses alongside gambling problems. © 2015 the Nordic Societies of Public Health.
For the investigation of the risk of nuclear power plants, loss-of-coolant accidents and transients have to be analyzed. The different functions of the engineered safety features installed to cope with transients are explained. The event tree analysis is carried out for the important transient 'loss of normal onsite power'. Preliminary results of the reliability analyses performed for the quantitative evaluation of this event tree are shown. (orig.) [de]
Limbourg, Philipp; Kochs, Hans-Dieter
Reliability optimization problems such as the redundancy allocation problem (RAP) have been of considerable interest in the past. However, due to the restrictions of the design space formulation, they may not be applicable in all practical design problems. A method with high modelling freedom for rapid design screening is desirable, especially in early design stages. This work presents a novel approach to reliability optimization. Feature modelling, a specification method originating from software engineering, is applied for the fast specification and enumeration of complex design spaces. It is shown how feature models can not only describe arbitrary RAPs but also much more complex design problems. The design screening is accomplished by a multi-objective evolutionary algorithm for probabilistic objectives. Comparing averages or medians may hide the true characteristics of these distributions. Therefore, the algorithm uses solely the probability of one system dominating another to achieve the Pareto optimal set. We illustrate the approach by specifying a RAP and a more complex design space and screening them with the evolutionary algorithm
Villesen, Christine; Hojsted, Jette; Kjeldsen, Lene Juel
... to a mutual agreement on the level of clinical significance. However, to what degree does the panel agree? Purpose: To compare the agreement between different health care professionals who have evaluated the clinical significance of DRPs. Materials and methods: DRPs were identified in 30 comprehensive medicines reviews conducted by a clinical pharmacist. Two hospital pharmacists, a general practitioner and two specialists in pain management from hospital care (the Panel) evaluated each DRP considering the potential clinical outcome for the patient. The DRPs were rated either nil, low, minor, moderate or highly clinically significant. Agreement was analysed using Kappa statistics. A Kappa value of 0.8 to 1.0 indicated nearly perfect agreement between ratings of the Panel members. Results: The Panel rated 45 percent of the total 162 DRPs as of moderate clinical significance. However, the overall kappa score was 0...
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
Sengupta Raghu Nandan
We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio's CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as uncertain and highly volatile financial markets.
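As a hedged illustration of the CVaR objective mentioned above (not of the paper's robust SOCP counterpart), the sketch below computes a historical-simulation CVaR for a fixed portfolio; the weights, confidence level and simulated heavy-tailed scenarios are assumptions made for the example.

```python
import numpy as np

def portfolio_cvar(loss_scenarios, weights, alpha=0.95):
    """Historical-simulation CVaR: mean portfolio loss in the worst
    (1 - alpha) fraction of scenarios. 'loss_scenarios' is an
    (n_scenarios x n_assets) matrix of loss returns."""
    losses = np.asarray(loss_scenarios) @ np.asarray(weights)
    var = np.quantile(losses, alpha)     # Value-at-Risk threshold
    return losses[losses >= var].mean()  # expected loss beyond VaR

# toy usage: 10,000 simulated heavy-tailed loss returns of 3 hypothetical assets
rng = np.random.default_rng(7)
scenarios = rng.standard_t(df=4, size=(10_000, 3)) * 0.02
w = np.array([0.5, 0.3, 0.2])
print(round(portfolio_cvar(scenarios, w, alpha=0.95), 4))
```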
Schuit, Ewoud; Roes, Kit C B; Mol, Ben W J; Kwee, Anneke; Moons, Karel G M; Groenwold, Rolf H H
BACKGROUND: Meta-analyses are typically triggered by a (potentially falsely significant) finding in one of the preceding primary studies. We studied the consequences of meta-analyses investigating effects when the primary studies that triggered such meta-analyses are also included. METHODS: We analytically
Malenchenko, A.F.; Mironov, V.P.
The data on actual wastes of nuclear power plants and on the environmental distribution and biological effects of radioactive iodine isotopes have been analyzed. The dose-response relationship is estimated, as well as its significance for struma maligna development under ionizing radiation and the contribution to this process of iodine radionuclides resulting from nuclear power engineering
Morita, Akio; Teraoka, Akira
Dynamic CT scan is a very useful method for the diagnosis of cerebral infarctions and other ischemic disorders. We have used this method for 1) the ultra-early stage diagnosis of major infarctions, 2) the detection of the recanalization and the disruption of the blood-brain barrier, and 3) the detection of latent ischemic lesions. In this report we discussed the clinical cases and the usual use of this dynamic CT scan. We used a GE CT/T8800 scanner for dynamic CT scanning. Manual bolus-contrast-medium injection was done simultaneously with the first scanning, and 6 sequential scannings (scan time: 4.8 s; scan interval: 1.4 s) were done on the same slice level. Especially in major infarctions (e.g., MCA occlusion), OM 40 was the most preferred slice. In cases of ultra-early stage infarctions (i.e., no abnormal lesions in non-enhanced CT), we used this dynamic CT scan immediately after the non-enhanced CT; we could thus obtain information on the ischemic lesions and the ischemic degree. After that we repeated this examination on Days 3, 7, and 14 for the evaluation of the recanalization and blood-brain-barrier disruption. In the cases of TIA and impending or progressing strokes, dynamic CT scan could disclose latent ischemic lesions; in these instances, we treated the patients intensively to prevent the prognosis from worsening. These benefits and also some problems were discussed. (author)
Yamazaki, Seiichiro; Shimada, Michiya.
For a divertor plate in a fusion power reactor, a high temperature coolant must be used for heat removal to keep thermal efficiency high. This makes the temperature and thermal stress of wall materials higher than the design limits. Issues of the coolant itself, e.g. burnout of high temperature water, will also become a serious problem. Sputtering erosion of the surface material will be a great concern for its lifetime. Therefore, it is necessary to reduce the heat and particle loads to the divertor plate technologically. The feasibility of some technological methods of heat reduction, such as separatrix sweeping, is discussed. As one of the most promising ideas, the methods of radiative cooling of the divertor plasma are summarized based on the recent results of large tokamaks. The feasibility of remote radiative cooling and a gas divertor is discussed. The ideas are considered in recent design studies of tokamak power reactors and experimental reactors. By way of example, conceptual designs of the divertor plate for the steady state tokamak power reactor are described. (author)
Shi, Jing; Ausloos, Marcel; Zhu, Tingting
We discuss a common suspicion about reported financial data, in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval [2000-2014]. These data are examined through Benford's law first-significant-digit test and through distribution distance tests. It is shown that several visually anomalous data points have to be removed a priori. Thereafter, the distributions follow the first-digit significance law much better, indicating the usefulness of a Benford's law test from the very start of the research. The same holds true for the distance tests. A few outliers are pointed out.
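A minimal sketch of the first-significant-digit check described above; the chi-squared distance, the digit-extraction trick and the lognormal toy data are illustrative assumptions, not the procedure or data of the paper.

```python
import math
import random
from collections import Counter

def benford_first_digit_test(values):
    """Compare the observed first-significant-digit frequencies of 'values'
    with Benford's law; return observed and expected proportions and a
    Pearson chi-squared distance."""
    digits = [int(f"{abs(v):e}"[0]) for v in values if v != 0]  # scientific notation -> first digit
    n = len(digits)
    counts = Counter(digits)
    obs = {d: counts.get(d, 0) / n for d in range(1, 10)}
    exp = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    chi2 = n * sum((obs[d] - exp[d]) ** 2 / exp[d] for d in range(1, 10))
    return obs, exp, chi2

# toy usage: a multiplicative (lognormal) process roughly follows Benford's law
random.seed(0)
data = [random.lognormvariate(10, 3) for _ in range(5000)]
_, _, chi2 = benford_first_digit_test(data)
print(f"chi-squared distance: {chi2:.1f}")
```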
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this
Donnelly, Lane F
Deploying an intentional daily management process is a key part of creating a high-reliability culture. Key components described in the literature for a successful daily management process include leadership standard work, visual controls, daily accountability processes, and the discipline to stick to the process over the long term. We believe that the institution of a daily readiness huddle has helped us better coordinate and communicate as a department and improved our ability to deliver imaging services on a daily basis. The daily readiness huddle has enabled us to identify issues more rapidly and has brought accountability for seeing that solutions to those issues are brought to fruition. In addition, it has helped with team building, including between the radiologists and the nonphysician staff. Copyright © 2017 Elsevier Inc. All rights reserved.
Newton, Nigel J; Mitter, Sanjoy K
We consider a family of estimation problems not admitting conventional analysis because of singularity and measurability issues. We define posterior distributions for the family by a variational technique analogous to that used to define Gibbs measures in statistical mechanics. The family of estimation problems, which arise in the asymptotic analysis of error-control codes, is parametrized by a code rate, R∈(0,∞); this is shown to be analogous to the absolute temperature of statistical mechanics. The family undergoes an (Ehrenfest) first-order phase transition at a critical code rate C (the channel capacity), where there is a convex set of posterior distributions. At all other code rates there is only one posterior distribution; if R < C it has finite support, while if R > C it has infinite support. In a result reflecting the Dobrushin construction, we show that these posterior distributions are asymptotically consistent with those of families of finite-sequence error-control codes. (paper)
Kjersti Marie Blytt
Abstract Background Sleep disturbances are widespread among nursing home (NH) patients and associated with numerous negative consequences. Identifying and treating them should therefore be of high clinical priority. No prior studies have investigated the degree to which sleep disturbances as detected by actigraphy and by the sleep-related items in the Cornell Scale for Depression in Dementia (CSDD) and the Neuropsychiatric Inventory – Nursing Home version (NPI-NH) provide comparable results. Such knowledge is highly needed, since both questionnaires are used in clinical settings and studies use the NPI-NH sleep item to measure sleep disturbances. For this reason, insight into their relative (dis)advantages is valuable. Method Cross-sectional study of 83 NH patients. Sleep was objectively measured with actigraphy for 7 days, and rated by NH staff with the sleep items in the CSDD and the NPI-NH, and the results were compared. McNemar's tests were conducted to investigate whether there were significant differences between the pairs of relevant measures. Cohen's Kappa tests were used to investigate the degree of agreement between the pairs of relevant actigraphy, NPI-NH and CSDD measures. Sensitivity and specificity analyses were conducted for each of the pairs, and receiver operating characteristic (ROC) curves were designed as a plot of the true positive rate against the false positive rate for the diagnostic test. Results Proxy-raters reported sleep disturbances in 20.5% of patients assessed with the NPI-NH, and 18.1% (difficulty falling asleep), 43.4% (multiple awakenings) and 3.6% (early morning awakenings) of patients had sleep disturbances assessed with the CSDD. Our results showed significant differences (p<0.001) between actigraphy measures and proxy-rated sleep by the NPI-NH and CSDD. Sensitivity and specificity analyses supported these results. Conclusions Compared to actigraphy, proxy-raters clearly underreported NH patients' sleep disturbances as assessed
Yang, Yi; Bae, Junghan; Kim, Junbeum; Suh, Sangwon
Previous studies on the life-cycle environmental impacts of corn ethanol and gasoline focused almost exclusively on energy balance and greenhouse gas (GHG) emissions and largely overlooked the influence of regional differences in agricultural practices. This study compares the environmental impact of gasoline and E85 taking into consideration 12 different environmental impacts and regional differences among 19 corn-growing states. Results show that E85 does not outperform gasoline when a wide spectrum of impacts is considered. If the impacts are aggregated using weights developed by the National Institute of Standards and Technology (NIST), overall, E85 generates approximately 6% to 108% (23% on average) greater impact compared with gasoline, depending on where corn is produced, primarily because corn production induces significant eutrophication impacts and requires intensive irrigation. If GHG emissions from the indirect land use changes are considered, the differences increase to between 16% and 118% (33% on average). Our study indicates that replacing gasoline with corn ethanol may only result in shifting the net environmental impacts primarily toward increased eutrophication and greater water scarcity. These results suggest that the environmental criteria used in the Energy Independence and Security Act (EISA) be re-evaluated to include additional categories of environmental impact beyond GHG emissions.
Sell, B. K.; Sadler, P.; Leslie, S.; Mitchell, C.; Samson, S. D.
The abundant exposures of Mohawkian (late Sandbian to early Katian) sedimentary rocks in eastern North America have been well-studied for insights into the Taconic orogeny and potential petroleum sources. Considerable information has been published toward establishing the sequence stratigraphic architecture, biozones for conodonts, graptolites and chitinozoans, chemostratigraphic correlations and a tephrochronologic framework. And yet, correlation remains difficult. Problems arise from complex sedimentary facies changes across the Laurentian margin and associated provincialism of the faunas. The difficulties are exacerbated by some imprecise usage of bentonite names, the short time spans of key stratigraphic sections, and a paucity of sections with multiple kinds of information. Also, linking so many taxon range-end, ash-fall, and stable isotope excursion events into a coherent stratigraphic sequence is a daunting numerical problem. It falls into the notorious "NP-Complete" category because the number of possible solutions grows so fast as the number of events increases. "Simulated annealing" is one of the algorithms developed for such problems. We adopt it to solve the stratigraphic sequencing problem as a constrained optimization (CONOP). Nevertheless, to realize the full potential, more bentonite characterization and dating is needed in sections with detailed range charts for fossil species. CONOP works best with the individual taxon ranges, not the derived biozone boundaries. We examine the potential resolving power of CONOP in the context of a re-evaluation of bentonite correlations and newly acquired CA-TIMS U-Pb zircon dates from sections with rich biostratigraphic data. In particular we use 206Pb/238U zircon dates from two bentonites in the Womble Shale at the Katian Global Stratotype Section and Point (452.8 ± 0.2 and 453.5 ± 0.3 Ma, weighted mean with 2σ internal error) to compare various correlations with other dated bentonites in eastern North America
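To make the simulated-annealing idea mentioned above concrete, here is a hedged, generic sketch that anneals an ordering of events against a placeholder misfit built from pairwise superposition constraints; it is not CONOP, and the event names, constraints and cooling schedule are invented for illustration.

```python
import math
import random

def simulated_annealing(events, misfit, n_iter=20000, t0=1.0, cooling=0.9995, seed=0):
    """Generic simulated-annealing search for an ordering of 'events' that
    minimises 'misfit(order)'. A bare skeleton, not CONOP itself."""
    rng = random.Random(seed)
    order = list(events)
    best = cur = misfit(order)
    best_order = order[:]
    t = t0
    for _ in range(n_iter):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]          # propose a swap
        new = misfit(order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]      # reject: undo swap
        t *= cooling
    return best_order, best

# toy misfit: penalise violations of known 'a observed below b' constraints
constraints = [("ash_A", "ash_B"), ("species_x_first", "species_x_last"),
               ("ash_B", "species_x_last")]
events = ["species_x_last", "ash_B", "ash_A", "species_x_first"]

def misfit(order):
    pos = {e: k for k, e in enumerate(order)}
    return sum(pos[a] > pos[b] for a, b in constraints)

print(simulated_annealing(events, misfit))
```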
Zeigler, R. A.; Jolliff, B. L.; Korotev, R. L.; Shearer, C. K.
A mission to land in the giant South Pole-Aitken (SPA) Basin on the Moon's southern farside and return a sample to Earth for analysis is a high priority for Solar System Science. Such a sample would be used to determine the age of the SPA impact; the chronology of the basin, including the ages of basins and large impacts within SPA, with implications for early Solar System dynamics and the magmatic history of the Moon; the age and composition of volcanic rocks within SPA; the origin of the thorium signature of SPA with implications for the origin of exposed materials and thermal evolution of the Moon; and possibly the magnetization that forms a strong anomaly especially evident in the northern parts of the SPA basin. It is well known from studies of the Apollo regolith that rock fragments found in the regolith form a representative collection of many different rock types delivered to the site by the impact process (Fig. 1). Such samples are well documented to contain a broad suite of materials that reflect both the local major rock formations, as well as some exotic materials from far distant sources. Within the SPA basin, modeling of the impact ejection process indicates that regolith would be dominated by SPA substrate, formed at the time of the SPA basin-forming impact and for the most part moved around by subsequent impacts. Consistent with GRAIL data, the SPA impact likely formed a vast melt body tens of km thick that took perhaps several million years to cool, but that nonetheless represents barely an instant in geologic time that should be readily apparent through integrated geochronologic studies involving multiple chronometers. It is anticipated that a statistically significant number of age determinations would yield not only the age of SPA but also the age of several prominent nearby basins and large craters within SPA. This chronology would provide a contrast to the Imbrium-dominated chronology of the nearside Apollo samples and an independent test of
Gupta, R. K.; Bhunia, A. K.; Roy, D.
In this paper, we consider the problem of constrained redundancy allocation in a series system with interval-valued component reliabilities. To maximize the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by a penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, the corresponding problem has been solved by taking the lower and upper bounds of the interval-valued component reliabilities to be the same. The model is illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed component reliabilities are compared with existing results available in the literature. Finally, sensitivity analyses are shown graphically to study the stability of the developed GA with respect to the different GA parameters.
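As a hedged illustration of the interval-valued formulation above, the sketch below propagates interval component reliabilities through a series system of active-redundant subsystems and brute-forces a small allocation under a single cost constraint; the brute-force search stands in for the paper's GA, and the bounds, costs and budget are hypothetical.

```python
from itertools import product

def interval_subsystem(r_lo, r_hi, n):
    """Reliability interval of a subsystem with n parallel (active redundant)
    components whose reliability lies in [r_lo, r_hi]."""
    return 1 - (1 - r_lo) ** n, 1 - (1 - r_hi) ** n

def interval_system(bounds, allocation):
    """Series system of subsystems: multiply the subsystem intervals."""
    lo = hi = 1.0
    for (r_lo, r_hi), n in zip(bounds, allocation):
        s_lo, s_hi = interval_subsystem(r_lo, r_hi, n)
        lo, hi = lo * s_lo, hi * s_hi
    return lo, hi

def best_allocation(bounds, costs, budget, max_redundancy=4):
    """Brute-force stand-in for the GA in the paper: maximise the interval
    centre of system reliability subject to one cost constraint."""
    best, best_alloc = None, None
    for alloc in product(range(1, max_redundancy + 1), repeat=len(bounds)):
        if sum(c * n for c, n in zip(costs, alloc)) > budget:
            continue
        lo, hi = interval_system(bounds, alloc)
        centre = (lo + hi) / 2
        if best is None or centre > best:
            best, best_alloc = centre, alloc
    return best_alloc, best

# toy data (hypothetical): 3 subsystems with interval component reliabilities
bounds = [(0.80, 0.85), (0.70, 0.80), (0.90, 0.92)]
costs, budget = [2, 3, 1], 14
print(best_allocation(bounds, costs, budget))
```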
Alexandr E. Anikin
The problem considered in the article is the following: in the Russian dialect dictionaries (first and foremost, the "Dictionary of Russian Dialects", but also the dictionary of V. I. Dal' and occasionally others) we sometimes find distortions of lexical data (due to errors in recording, typos, etc.) that lead to incorrect etymological and other solutions. For example, dial. otsúmivat' 'avert love' has been explicated as a loan-word from Turkish süm 'love', when in reality we have to do with a distortion of otsúshivat', an antonym of prisushít' 'make lovesick', cf. sushít'. According to the article, the solution to the problem lies in allowing conjectures for dialectal words as one of the resources of Russian etymology. A special concern in this study is the «Dictionary of the Russian Dialects of Transbaikalia» by L.E. Eliasov. The article argues that it contains distorted lexical data such as son 'sweet meat' instead of the expected sok 'very tasty meat'.
Komarov, Yu. A.
An analysis and some generalizations of approaches to risk assessments are presented. Interconnection between different interpretations of the "risk" notion is shown, and the possibility of applying the fuzzy set theory to risk assessments is demonstrated. A generalized formulation of the risk assessment notion is proposed in applying risk-oriented approaches to the problem of enhancing reliability and safety in nuclear power engineering. The solution of problems using the developed risk-oriented approaches aimed at achieving more reliable and safe operation of NPPs is described. The results of studies aimed at determining the need (advisability) to modernize/replace NPP elements and systems are presented together with the results obtained from elaborating the methodical principles of introducing the repair concept based on the equipment technical state. The possibility of reducing the scope of tests and altering the NPP systems maintenance strategy is substantiated using the risk-oriented approach. A probabilistic model for estimating the validity of boric acid concentration measurements is developed.
Proposing a robustly designed facility location is one of the most effective ways to hedge against unexpected disruptions and failures in a transportation network system. This paper considers the combined facility location/network design problem with regard to transportation link disruptions and develops a mixed integer linear programming formulation to model it. With respect to the probability of link disruptions, the objective function of the model minimizes the total costs, including location costs, link construction costs and the expected transportation costs. An efficient hybrid algorithm based on LP relaxation and a variable neighbourhood search metaheuristic is developed in order to solve the mathematical model. Numerical results demonstrate that the proposed hybrid algorithm is efficient in terms of solution time and delivers excellent solution quality.
The numerical approximation of the solutions of fluid flow models is a difficult problem in many cases of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite elements, etc.) is sufficient to describe the long time behavior of the exact solutions. It was shown using several approaches that some of the estimates of N based on physical intuition are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be and was already applied to many other dissipative partial differential systems) consists in giving upper estimates of the fractal dimension of the attractor associated to one (or all) solution(s) of the respective partial differential equations. 56 refs
Nilsson, Anders; Magnusson, Kristoffer; Carlbring, Per; Andersson, Gerhard; Gumpert, Clara Hellner
Problem gambling creates significant harm for the gambler and for concerned significant others (CSOs). While several studies have investigated the effects of individual cognitive behavioral therapy (CBT) for problem gambling, less is known about the effects of involving CSOs in treatment. Behavioral couples therapy (BCT) has shown promising results when working with substance use disorders by involving both the user and a CSO. This pilot study investigated BCT for problem gambling, as well as the feasibility of performing a larger scale randomized controlled trial. 36 participants, 18 gamblers and 18 CSOs, were randomized to either BCT or individual CBT for the gambler. Both interventions were Internet-delivered self-help interventions with therapist support. Both groups of gamblers improved on all outcome measures, but there were no differences between the groups. The CSOs in the BCT group lowered their scores on anxiety and depression more than the CSOs of those randomized to the individual CBT group did. The implications of the results and the feasibility of the trial are discussed.
Travel delays and associated costs have become a major problem in Michigan over the past several decades as congestion has continued to increase, creating significant negative impacts on travel reliability on many roadways throughout the State. T...
Methods for obtaining the intensity of X-ray diffraction by one-dimensionally disordered lattices have been studied, and a matrix method was developed. The method has been applied to structural analysis. Several problems concerning neutron diffraction were shown in the course of the analysis. Large single crystals should be used for measurement. It is hard to grasp the local variation of structure. The technique of topography is still in development. Measurement of weak diffraction intensities is not sufficient. The photographic technique for observing overall features is not good. General remarks concerning the one-dimensionally disordered lattices are as follows. A large number of parameters is not practical for analysis, and preferably no more than two disorder parameters should be used. In the case of disorder between two kinds of layers having the same frequency and different structures, no peak shift is caused, and the Laue term remains at the position. The reliability of the structural analysis of liquids and amorphous solids is discussed. The analysis is basically that of a two-atom molecule of the same kind of atoms. The intensity of diffraction can be obtained from the radial distribution function (RDF). Since practical observation is limited to a finite region, the termination effect should be taken into consideration. The accuracy of the analysis is not good in the case of X-ray diffraction; analysis by neutron diffraction is preferable. (Kato, T.)
Zhang, Enze; Wu, Yifei; Chen, Qingwei
This paper proposes a practical approach, combining bare-bones particle swarm optimization and sensitivity-based clustering, for solving multi-objective reliability redundancy allocation problems (RAPs). A two-stage process is performed to identify promising solutions. Specifically, a new bare-bones multi-objective particle swarm optimization algorithm (BBMOPSO) is developed and applied in the first stage to identify a Pareto-optimal set. This algorithm mainly differs from other multi-objective particle swarm optimization algorithms in its parameter-free particle updating strategy, which is especially suitable for handling the complexity and nonlinearity of RAPs. Moreover, by utilizing an adaptive-grid approach to update the global particle leaders, a mutation operator to improve the exploration ability and an effective constraint handling strategy, the integrated BBMOPSO algorithm can generate an excellent approximation of the true Pareto-optimal front for RAPs. This is followed by a data clustering technique based on difference sensitivity in the second stage to prune the obtained Pareto-optimal set and obtain a small, workably sized set of promising solutions for system implementation. Two illustrative examples are presented to show the feasibility and effectiveness of the proposed approach.
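For readers unfamiliar with the "bare-bones" idea, the sketch below shows the parameter-free Gaussian position update that this family of algorithms is built on; it is a minimal single-objective sketch, not the authors' BBMOPSO (no Pareto archive, adaptive grid, mutation operator or constraint handling), and the toy objective, bounds and swarm size are assumptions for illustration (Python):

    # Minimal single-objective bare-bones PSO sketch (not the authors' BBMOPSO).
    import random

    def sphere(x):                       # toy objective to minimize
        return sum(v * v for v in x)

    DIM, SWARM, ITERS = 5, 20, 200
    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)

    for _ in range(ITERS):
        for i in range(SWARM):
            for d in range(DIM):
                mu = 0.5 * (pbest[i][d] + gbest[d])   # Gaussian mean
                sigma = abs(pbest[i][d] - gbest[d])   # Gaussian spread
                pos[i][d] = random.gauss(mu, sigma)   # no velocity, no tuned weights
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)

    print(gbest, sphere(gbest))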
Full Text Available The research analyzes recommendations existing in different sources of information for the choice of methods of strengthening and reconditioning worn machine parts. These methods include: electric arc deposition, chemical-thermal treatment, gas-powder deposition, gas-powder and plasma spraying, and electric arc metallization. As a result of studies of the wear of the working surfaces of the plates of silicate brick press boxes, we find that the plates wear out unevenly and the thickness of the worn layer varies between 0.3 and 2 mm. A technological method is chosen for enhancing and maintaining the plate reliability. One of the main technological stages of reliability formation is strengthening machine parts using strengthening technologies, namely electric arc metallization. Wires Нп-65Г, ФМИ-2 and Нп-40Х13 are used to develop wear-resistant coatings with the desired properties. The technological process of plate repair consists of the following basic operations: plate preparation, wire preparation, plate coating, plate grinding, and final checking. Single and complex reliability indicators are determined by testing a set of the plates and registering all the indicators (operating time, failures, faults). The value of the economic reliability index of the plate Kе equals 0.10. Higher plate reliability is achieved at the expense of extra cost for plate strengthening using wire Нп-40Х13, and the price of plate reliability Bн is 104.83 UAH. Complex reliability indicators of the reconditioned plate of the silica brick press boxes are used for a more complete reliability assessment. The availability coefficient Kг equals 0.995 and characterizes two different properties simultaneously: reliability and maintainability. The coefficient of technical use Kт.в. equals 0.974 and most fully characterizes the reliability of the plates because it considers time in the process of maintenance, repair and
Full Text Available In the paper the concept of a system structure, with particular emphasis on the reliability structure, has been presented. Advantages and disadvantages of modeling the reliability structure of a system using reliability block diagrams (RBD) have been shown. RBD models of a marine steam-water system constructed according to the concepts of 'multi-component', 'one-component' and mixed models have been discussed. Critical remarks on the practical application of models which recognize only the structural surplus have been dealt with. The significant value of the model by professors Smalko and Jaźwiński, called by them the 'default reliability structure', has been pointed out. The necessity of building a new type of models, quality-quantity models, useful in the author's methodology of multi-criteria analysis of the importance of elements in the reliability structure of complex technical systems, has also been pointed out.
Bursa, V.; Holousova, M.; Turnik, S.
At Skoda Works in Plzen, data from the period of construction of nuclear power plants are processed on an ICL DRS 300 computer. The database systems DBASE II and DATAFLEX are available for creating reliability information systems. The information system that is being developed for WWER-1000 units is tested at the WWER-440 units of the Dukovany and Mochovce nuclear power plants. Activities in the field of evaluation of structure reliability of WWER-1000 nuclear power plants are aimed at two major goals, viz., developing a methodology for testing the reliability of the whole unit and its subsystems, and performing reliability analysis and calculations of reliability indices of the secondary circuit of a WWER-1000 nuclear power plant. The reason for the latter concern is the fact that in 1984-1986, secondary circuits contributed 46% to failures of Czechoslovak WWER-440 nuclear power plants. According to existing analyses, the time fluctuations of reliability indices obey no rule that could be employed for inferring indices expected in steady-state operating conditions from indices established in the starting stage of operation. (Z.M.). 10 refs
Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings; models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)
The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)
Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark
End-to-end fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In our paper we propose new techniques to detect network performance problems proactively in close to real time, and we do not rely on static thresholds and SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbits/s to 1000 Mbits/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our
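As a hedged illustration of threshold-free detection of anomalous variations (not one of the authors' algorithms), the sketch below flags points where the recent mean of a network metric drifts far from its own recent history, scaled by the historical variability rather than by a user-configured static threshold; window sizes and the trigger factor are illustrative assumptions (Python):

    # Illustrative threshold-free step-change detector (not from the paper).
    import statistics

    def detect_step_changes(series, hist_win=50, recent_win=10, factor=3.0):
        """Flag indices where the recent mean drifts far from the history."""
        events = []
        for t in range(hist_win + recent_win, len(series)):
            hist = series[t - hist_win - recent_win : t - recent_win]
            recent = series[t - recent_win : t]
            mu, sigma = statistics.mean(hist), statistics.pstdev(hist)
            if sigma == 0:
                continue
            if abs(statistics.mean(recent) - mu) > factor * sigma:
                events.append(t)
        return events

    # Example: synthetic throughput trace with a sustained drop at index 120.
    trace = [100.0 + (i % 7) for i in range(120)] + [60.0 + (i % 7) for i in range(60)]
    print(detect_step_changes(trace))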
In Japan, while interest in nuclear terrorism has increased, which led to a law revision in 2005 aiming at reinforcing physical protection, there is a growing concern about an insider threat in the nuclear industry. To cope with this threat, 'personnel reliability certification systems' have been introduced in the United States and other nuclear industrialized countries as an effective measure. The report examines the current personnel reliability certification systems in Germany and the United States, and identifies common characteristics as well as key differences between the two legal systems and regulations. It thereby attempts to identify the potential problems that the Japanese nuclear industry would face if institutions such as those seen in Germany and the United States were introduced in Japan, and suggests some measures to overcome these problems. The author suggests the following measures as practically useful and essential in introducing similar systems in Japan: (1) introduction of a comprehensive (not industry-specific) regulation on personnel reliability certification systems, (2) clarification of the conditions of prior consent by the employees, and (3) privacy protection procedures for the employees and information management. (author)
Kuijpers, R.C.W.M.; Otten, R.; Krol, N.P.C.M.; Vermulst, A.A.; Engels, R.C.M.E.
Children and youths' self-report of mental health problems is considered essential but complicated. This study examines the psychometric properties of the Dominic Interactive, a computerized DSM-IV based self-report questionnaire and explores informant correspondence. The Dominic Interactive was
Sidhu, Shailpreet K; Malhotra, Sita; Devi, Pushpa; Tuli, Arpandeep K
Coagulase-negative staphylococci (CoNS) are frequently isolated from blood cultures but their significance is difficult to interpret. CoNS, often previously dismissed as culture contaminants, have attracted greater importance as true pathogens in the past decades. Clinical evaluation of these isolates suggests that although there has been a relative increase of CoNS-associated bloodstream infections in recent years, the microorganisms still remain the most common contaminants in blood cultures. The objective of this study was to determine the significance of CoNS isolated from blood cultures. A retrospective study was conducted to evaluate the rate of contamination in blood cultures in a tertiary care hospital. The paired specimens of blood were cultured using conventional culture methods and the isolates of coagulase-negative staphylococci were identified by standard methodology. Clinical data, laboratory indices, microbiological parameters and patient characteristics were analyzed. Of 3503 blood samples, CoNS were isolated from the blood cultures of 307 patients (8.76%). The isolates were reported as true pathogens of bloodstream infections in only 74 out of 307 cases (24.1%). In the vast majority, 212 of 307 (69.0%), they were mere blood culture contaminants and were reported as insignificant/contaminant. Determining whether a growth in the blood culture is a pathogen or a contaminant is a critical issue and multiple parameters have to be considered before arriving at a conclusion. Ideally, the molecular approach is for the most part a consistent method for determining the significant isolates of CoNS. However, in countries with inadequate resources, species identification and antibiogram tests are recommended when determining the significance of these isolates.
Paradies, Guglielmo; Zullino, Francesca; Orofino, Antonio; Leggio, Samuele
Extragonadal teratomas are rare tumors in neonates and infants and can sometimes show unusual, distinctive features, such as an unusual location, a sometimes acute clinical presentation, and a "fetiform" histotype of the lesion. We have extrapolated, from our entire experience with teratomas, 4 unusual cases, mostly operated on as emergencies; 2 of them were treated just after birth. The aim of this paper is to report the clinical and pathological findings, and to evaluate the surgical approach and the long-term biological behaviour in these cases, in the light of survival and current insights reported in the literature. The Authors reviewed the most significant (Tables I and II) clinical, laboratory, radiologic and pathologic findings, surgical procedures, and early and long-term results in 4 children, 1 male and 3 females (M/F ratio: 1/3), suffering from extragonadal teratomas located in the temporo-zygomatic region of the head (Case n. 1, Fig. 1), the retroperitoneal space (Case n. 2, Fig. 2), the liver (Case n. 3, Figs. 3-5) and the kidney (Case n. 4, Figs. 6, 7), respectively. Of the 4 patients, 2 were treated neonatally (1 teratoma of the head, 1 retroperitoneal teratoma). A prenatal diagnosis had already been made in 2 of the 4 patients, between the 2nd and 3rd trimester of pregnancy. All the infants were born by scheduled caesarean section in a tertiary care hospital and were then immediately referred to the N.I.C.U.s. Because of a mostly acute clinical presentation, the 4 patients were then referred to the surgical unit at different ages: 7 days, 28 days, 7 months, and 4 years, respectively. The initial clinical presentation (Table II) was consistent with the site of the mass and/or its side effects. The 2 newborns (Cases 1 and 2), both with a prenatally diagnosed mass located in the temporo-zygomatic region and in the abdominal cavity respectively, already displayed at birth a mass with a tendency to further growth. The symptoms and signs described to the primary care physician by the parents of the 2
In safety problems of nuclear plants interest centres on various types of rare events. These may extend from rare modes of failures in the component parts of the system and plant, the simultaneous occurrence of a very low resistance of a structural member and an extremely high load, to rare catastrophic failures which affect whole plant and system complexes. The obvious need is therefore to understand the patterns of behaviour of these events in space and time domains and be able to make some adequate estimate of their probability of occurrence. In 1975, a Task Force of Experts on the Statistical Analysis of Rare Events in Nuclear Installations was set up, its main objective being to explore methods for handling the problems of reliability analysis involving rare events. The main topic area of the Ispra meeting concerns the statistical analysis of structures such as containment and pressure vessels, automatic protection systems, and allied items, with a view to giving a quantified probabilistic statement on particular reliability characteristics. The particular subjects covered by the invited experts who would normally be working in specialized fields, includes, specifically, statistical modelling of rare events, decision theory applied to rare events, small sampling theory in the case of rare events. Man-made phenomena as well as natural phenomena are also considered, as they involve different approaches to modelling. 22 papers are presented, with reports on group discussions
Wang, Jie; Xu, Fenghua
Using the Wuling Mountain area as a case study, the authors discuss the significance as well as five problems of developing information technology for vocational education in contiguous destitute areas. Recommendations are provided at the end of the article. [Translated by Michelle LeSourd.]
Full Text Available Environmental pollution by mercury is a world-wide problem. Floodplain ecosystems in particular are frequently affected. One example is the Elbe River in Germany and its catchment areas; large amounts of Hg from a range of anthropogenic and geogenic sources have accumulated in the soils of these floodplains. They serve as a sink for Hg originating from the surface water of the adjacent river. Today, the vastly elevated Hg contents of the floodplain soils at the Elbe River often exceed even the action values of the German Soil Conservation Law. This is especially important as Hg-polluted areas at the Elbe River cover several hundred square kilometres. Thus, authorities are required by law to conduct an appropriate risk assessment and to implement practical actions to eliminate or reduce environmental problems. A reliable risk assessment, particularly with a view to organisms (vegetation used as green fodder and for hay production, grazing and wild animals) to avoid the transfer of Hg into the human food chain, requires an authentic determination of Hg fluxes and their dynamics, since gaseous emission from soil to atmosphere is an important pathway of Hg. However, reliable estimates of Hg fluxes from the highly polluted floodplain soils at the Elbe River and its tributaries, and of their influencing factors, are scarce. For this purpose, we have developed a new method to determine mercury emissions from soils at various sites. Our objectives were (i) to quantify seasonal variations of total gaseous mercury (TGM) fluxes for floodplain soils at the Elbe River, (ii) to provide insights into the physico-chemical processes regulating these TGM fluxes, and (iii) to quantify the impacts of the controlling factors soil temperature and soil water content on Hg volatilization from a typical contaminated floodplain soil within soil microcosm experiments under various controlled temperature and moisture conditions. Our study provides insight into TGM emissions from highly Hg
Masri, Karim R; Shaver, Timothy S; Shahouri, Shadi H; Wang, Shirley; Anderson, James D; Busch, Ruth E; Michaud, Kaleb; Mikuls, Ted R; Caplan, Liron; Wolfe, Frederick
To investigate what factors influence patient global health assessment (PtGlobal), and how those factors and the reliability of PtGlobal affect the rate, reliability, and validity of recently published American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) rheumatoid arthritis (RA) remission criteria when used in clinical practice. We examined consecutive patients with RA in clinical practice and identified 77 who met ACR/EULAR joint criteria for remission (≤ 1 swollen joint and ≤ 1 tender joint). We evaluated factors associated with a PtGlobal > 1, because a PtGlobal ≤ 1 defined ACR/EULAR remission in this group of patients who had already met ACR/EULAR joint criteria. Of the 77 patients examined, only 17 (22.1%) had PtGlobal ≤ 1 and thus fully satisfied ACR/EULAR criteria. A large proportion of patients not in remission by ACR/EULAR criteria had high PtGlobal related to noninflammatory issues, including low back pain, fatigue, and functional limitations, and a number of patients clustered in the range of PtGlobal > 1 and ≤ 2. However, the minimal detectable difference for PtGlobal was 2.3. In addition, compared with a PtGlobal severity score, a PtGlobal activity score was 3.3% less likely to be abnormal (> 1). Noninflammatory factors contribute to the level of PtGlobal and result in the exclusion of many patients who would otherwise be in "true" remission according to the ACR/EULAR definition. Reliability problems associated with PtGlobal can also result in misclassification, and may explain the observation of low longterm remission rates in RA. As currently constituted, the use of the ACR/EULAR remission criteria in clinical practice appears to be problematic.
Full Text Available This paper presents a new multi-objective model for a vehicle routing problem under stochastic uncertainty. It considers a traffic point as an inflection point to deal with the arrival time of vehicles. It aims to minimize the total transportation cost, traffic pollution and customer dissatisfaction, and to maximize the reliability of vehicles. Moreover, resiliency factors are included in the model to increase the flexibility of the system and decrease the possible losses that may be imposed on the system. Due to the NP-hardness of the presented model, a meta-heuristic algorithm, namely Simulated Annealing (SA), is developed. Furthermore, a number of sensitivity analyses are provided to validate the effectiveness of the proposed model. Lastly, the foregoing meta-heuristic is compared with GAMS, and the computational results demonstrate an acceptable performance of the proposed SA.
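To illustrate the kind of meta-heuristic named in the abstract (and not the authors' multi-objective, resilience-aware model), the sketch below is a minimal simulated annealing loop for a single-vehicle tour using a 2-opt move and a geometric cooling schedule; the coordinates and all parameters are assumptions for illustration (Python):

    # Minimal simulated annealing sketch for a single-vehicle tour (not the paper's SA).
    import math
    import random

    CUSTOMERS = [(0, 0), (2, 5), (5, 2), (6, 6), (8, 3), (1, 7)]

    def tour_length(order):
        return sum(
            math.dist(CUSTOMERS[order[i]], CUSTOMERS[order[(i + 1) % len(order)]])
            for i in range(len(order))
        )

    def simulated_annealing(t0=10.0, cooling=0.995, steps=5000):
        current = list(range(len(CUSTOMERS)))
        random.shuffle(current)
        best, temp = current[:], t0
        for _ in range(steps):
            i, j = sorted(random.sample(range(len(current)), 2))
            cand = current[:i] + current[i:j + 1][::-1] + current[j + 1:]  # 2-opt move
            delta = tour_length(cand) - tour_length(current)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = cand
                if tour_length(current) < tour_length(best):
                    best = current[:]
            temp *= cooling                       # geometric cooling schedule
        return best, tour_length(best)

    print(simulated_annealing())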
Tedesco, Serena; De Majo, Federica; Kim, Jieun; Trenti, Annalisa; Trevisi, Lucia; Fadini, Gian Paolo; Bolego, Chiara; Zandstra, Peter W; Cignarella, Andrea; Vitiello, Libero
Human peripheral-blood monocytes are used as an established in vitro system for generating macrophages. For several reasons, monocytic cell lines such as THP-1 have been considered as a possible alternative. In view of their distinct developmental origins and phenotypic attributes, we set out to assess the extent to which human monocyte-derived macrophages (MDMs) and phorbol-12-myristate-13-acetate (PMA)-differentiated THP-1 cells were overlapping across a variety of responses to activating stimuli. Resting (M0) macrophages were polarized toward M1 or M2 phenotypes by 48-h incubation with LPS (1 μg/ml) and IFN-γ (10 ng/ml) or with IL-4 (20 ng/ml) and IL-13 (5 ng/ml), respectively. At the end of stimulation, MDMs displayed more pronounced changes in marker gene expression than THP-1. Upon assaying an array of 41 cytokines, chemokines and growth factors in conditioned media (CM) using the Luminex technology, secretion of 29 out of the 41 proteins was affected by polarized activation. While in 12 of them THP-1 and MDM showed comparable trends, for the remaining 17 proteins their responses to activating stimuli did markedly differ. Quantitative comparison for selected analytes confirmed this pattern. In terms of phenotypic activation markers, measured by flow cytometry, M1 response was similar but the established MDM M2 marker CD163 was undetectable in THP-1 cells. In a beads-based assay, MDM activation did not induce significant changes, whereas M2 activation of THP-1 decreased phagocytic activity compared to M0 and M1. In further biological activity tests, both MDM and THP-1 CM failed to affect proliferation of mouse myogenic progenitors, whereas they both reduced adipogenic differentiation of mouse fibro-adipogenic progenitor cells (M2 to a lesser extent than M1 and M0). Finally, migration of human umbilical vein endothelial cells was enhanced by CM irrespective of cell type and activation state except for M0 CM from MDMs. In summary, PMA-differentiated THP-1
Addiction is one of the social problems which has a significant role in the spiritual and physical health of the person, the family, and especially the health of society. One of the main factors for the continuance and intensity of involvement in addiction is that the individual's network of positive (healthy) relations is small. Therefore, in this study, the role of the network of positive (healthy) and negative (unhealthy) relations in the degree (intensity) of the person's involvement in ...
Svensson, Jessika; Romild, Ulla; Shepherdson, Emma
Research into the impact of problem gambling on close social networks is scarce with the majority of studies only including help-seeking populations. To date only one study has examined concerned significant others (CSOs) from an epidemiological perspective and it did not consider gender. The aim of this study is to examine the health, social support, and financial situations of CSOs in a Swedish representative sample and to examine gender differences. A population study was conducted in Sweden in 2008/09 (n = 15,000, response rate 63%). Respondents were defined as CSOs if they reported that someone close to them currently or previously had problems with gambling. The group of CSOs was further examined in a 1-year follow up (weighted response rate 74% from the 8,165 respondents in the original sample). Comparisons were also made between those defined as CSOs only at baseline (47.7%, n = 554) and those defined as CSOs at both time points. In total, 18.2% of the population were considered CSOs, with no difference between women and men. Male and female CSOs experienced, to a large extent, similar problems including poor mental health, risky alcohol consumption, economic hardship, and arguments with those closest to them. Female CSOs reported less social support than other women and male CSOs had more legal problems and were more afraid of losing their jobs than other men. One year on, several problems remained even if some improvements were found. Both male and female CSOs reported more negative life events in the 1 year follow-up. Although some relationships are unknown, including between the CSOs and the individuals with gambling problems and the causal relationships between being a CSO and the range of associated problems, the results of this study indicate that gambling problems not only affect the gambling individual and their immediate close family but also the wider social network. A large proportion of the population can be defined as a CSO, half of whom are
Magnusson, Kristoffer; Nilsson, Anders; Hellner Gumpert, Clara; Andersson, Gerhard; Carlbring, Per
About 2.3% of the adult population in Sweden are considered to suffer from problem gambling, and it is estimated that only 5% of those seek treatment. Problem gambling can have devastating effects on the economy, health and relationship, both for the individual who gambles and their concerned significant other (CSO). No empirically supported treatment exists for the CSOs of people with problem gambling. Consequently, the aim of this study is to develop and evaluate a programme aimed at CSOs of treatment-refusing problem gamblers. The programme will be based on principles from cognitive behavioural therapy (CBT) and motivational interviewing. To benefit as many CSOs as possible, the programme will be delivered via the internet with therapist support via encrypted email and short weekly conversations via telephone. This will be a randomised wait-list controlled internet-delivered treatment trial. A CBT programme for the CSOs of people with problem gambling will be developed and evaluated. The participants will work through nine modules over 10 weeks in a secure online environment, and receive support via secure emails and over the telephone. A total of 150 CSOs over 18 years of age will be included. Measures will be taken at baseline and at 3, 6 and 12 months. Primary outcomes concern gambling-related harm. Secondary outcomes include the treatment entry of the individual who gambles, the CSO's levels of depression, anxiety, as well as relationship satisfaction and quality of life. The protocol has been approved by the regional ethics board of Stockholm, Sweden. This study will add to the body of knowledge on how to protect CSOs from gambling-related harm, and how to motivate treatment-refusing individuals to seek professional help for problem gambling. NCT02250586. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Deans, N.D.; Miller, A.J.; Mann, D.P.
The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)
Lee, Sang Yong
This book is about reliability engineering. It describes the definition and importance of reliability, the development of reliability engineering, the failure rate and the failure probability density function and their types, CFR and the exponential distribution, IFR and the normal and Weibull distributions, maintainability and availability, reliability testing and reliability estimation for the exponential, normal and Weibull distribution types, reliability sampling tests, system reliability, design for reliability, and failure analysis by FTA.
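For reference, the constant-failure-rate (CFR) and Weibull lifetime models named in this summary have the standard textbook reliability functions below; these are generic forms, not reproduced from the book itself:

    R(t) = e^{-\lambda t}, \qquad f(t) = \lambda e^{-\lambda t}, \qquad \mathrm{MTTF} = 1/\lambda   (exponential, CFR)

    R(t) = \exp[-(t/\eta)^{\beta}], \qquad \lambda(t) = (\beta/\eta)(t/\eta)^{\beta - 1}   (Weibull; \beta > 1 gives an increasing failure rate, IFR)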
Full Text Available Abstract Background Epidemiological studies show wide variability in the occurrence of cannabis smoking and related disorders across countries. This study aims to estimate cross-national variation in cannabis users' experience of clinically significant cannabis-related problems in three countries of the Americas, with a focus on cannabis users who may have tried alcohol or tobacco, but who have not used cocaine, heroin, LSD, or other internationally regulated drugs. Methods Data are from the World Mental Health Surveys Initiative and the National Latino and Asian American Study, with probability samples in Mexico (n = 4,426), Colombia (n = 5,782) and the United States (USA; n = 8,228). The samples included 212 'cannabis only' users in Mexico, 260 in Colombia and 1,724 in the USA. Conditional GLM with GEE and 'exact' methods were used to estimate variation in the occurrence of clinically significant problems in cannabis only (CO) users across these surveyed populations. Results The experience of cannabis-related problems was quite infrequent among CO users in these countries, with weighted frequencies ranging from 1% to 5% across survey populations, and with no appreciable cross-national variation in general. CO users in Colombia proved to be an exception. As compared to CO users in the USA, the Colombian smokers were more likely to have experienced cannabis-associated 'social problems' (odds ratio, OR = 3.0; 95% CI = 1.4, 6.3; p = 0.004) and 'legal problems' (OR = 9.7; 95% CI = 2.7, 35.2; p = 0.001). Conclusions This study's most remarkable finding may be the similarity in occurrence of cannabis-related problems in this cross-national comparison within the Americas. Wide cross-national variations in estimated population-level cumulative incidence of cannabis use disorders may be traced to large differences in cannabis smoking prevalence, rather than qualitative differences in cannabis experiences. More research is needed to identify conditions that
Somma, Antonella; Borroni, Serena; Maffei, Cesare; Giarolli, Laura E; Markon, Kristian E; Krueger, Robert F; Fossati, Andrea
In order to assess the reliability, factorial validity, and criterion validity of the Personality Inventory for DSM-5 (PID-5) among adolescents, 1,264 Italian high school students were administered the PID-5. Participants were also administered the Questionnaire on Relationships and Substance Use as a criterion measure. In the full sample, McDonald's ω values were adequate for the PID-5 scales (median ω = .85, SD = .06), except for Suspiciousness. However, all PID-5 scales showed average inter-item correlation values in the .20-.55 range. Exploratory structural equation modeling analyses provided moderate support for the a priori model of PID-5 trait scales. Ordinal logistic regression analyses showed that selected PID-5 trait scales predicted a significant, albeit moderate (Cox & Snell R 2 values ranged from .08 to .15, all ps < .001) amount of variance in Questionnaire on Relationships and Substance Use variables.
Enevoldsen, I.; Sørensen, John Dalsgaard
In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability...
Jacobson, Mark Z; Delucchi, Mark A; Cameron, Mary A; Frew, Bethany A
This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050-2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide.
Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre
The Child Development Evaluation (CDE) is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a semaphore. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test to discriminate between children with a yellow or red result based on the developmental domain quotient (DDQ) obtained through the Battelle Development Inventory, 2nd edition (in Spanish) (BDI-2). The information for this study was obtained from the validation study. Children with a normal (green) result in the CDE were excluded. Two different cut-off points of the DDQ (BDI-2) were used: social: 20.1% vs. 28.9%; and adaptive: 6.9% vs. 20.4%. The yellow/red semaphore result allows identifying different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of different interventions for each one. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Khalili-Damghani, Kaveh; Amiri, Maghsoud
In this paper, a procedure based on an efficient epsilon-constraint method and data envelopment analysis (DEA) is proposed for solving the binary-state multi-objective reliability redundancy allocation series-parallel problem (MORAP). In the first module, a set of qualified non-dominated solutions on the Pareto front of the binary-state MORAP is generated using an efficient epsilon-constraint method. In order to test the quality of the non-dominated solutions generated in this module, a multi-start partial bound enumeration algorithm is also proposed for MORAP. The performance of both procedures is compared using different metrics on a well-known benchmark instance. The statistical analysis shows that the proposed efficient epsilon-constraint method not only outperforms the multi-start partial bound enumeration algorithm but also improves the known upper bound of the benchmark instance. Then, in the second module, a DEA model is supplied to prune the non-dominated solutions generated by the efficient epsilon-constraint method. This helps reduce the number of non-dominated solutions in a systematic manner and eases the decision making process for practical implementations. - Highlights: ► A procedure based on efficient epsilon-constraint method and DEA was proposed for solving MORAP. ► The performance of proposed procedure was compared with a multi-start PBEA. ► Methods were statistically compared using multi-objective metrics.
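In generic form, the epsilon-constraint idea on which the first module rests keeps one objective and turns the others into constraints; the statement below is the standard textbook form, with symbols chosen here for illustration rather than taken from the paper:

    \max_{x \in X} f_1(x) \quad \text{subject to} \quad f_j(x) \ge \varepsilon_j, \quad j = 2, \dots, k.

Sweeping the \varepsilon_j over a grid of levels generates candidate non-dominated solutions of the multi-objective problem; in a redundancy allocation setting f_1 would typically be system reliability and the remaining f_j the negatives of cost, weight or volume.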
Altenstein-Yamanaka, David; Zimmermann, Johannes; Krieger, Tobias; Dörig, Nadja; Grosse Holtforth, Martin
Interpersonal factors play a major role in causing and maintaining depression. This study sought to investigate how patients' self-perceived interpersonal problems and impact messages as perceived by significant others are interrelated, change over therapy, and differentially predict process and outcome in psychotherapy of depression. For the present study, we used data from 144 outpatients suffering from major depression that were treated within a psychotherapy study. Interpersonal variables were assessed pre- and posttherapy with the self-report Inventory of Interpersonal Problems-Circumplex Scale (IIP-32; Thomas, Brähler, & Strauss, 2011) and with the informant-based Impact Message Inventory (Caspar, Berger, Fingerle, & Werner, 2016). Patients' levels on the dimensions of Agency and Communion were calculated from both measures; their levels on Interpersonal Distress were measured with the IIP. Depressive and general symptomatology was assessed at pre-, post-, and at 3-month follow-up; patient-reported process measures were assessed during therapy. The Agency scores of IIP and IMI correlated moderately, but the Communion scores did not. IIP Communion was positively associated with the quality of the early therapeutic alliance and with the average level of cognitive-emotional processing during therapy. Whereas IIP Communion and IMI Agency increased over therapy, IIP Distress decreased. A pre-post-decrease in IIP Distress was positively associated with pre-postsymptomatic change over and above the other interpersonal variables, but pre-post-increase in IMI Agency was positively associated with symptomatic improvement from post- to 3-month follow-up. These findings suggest that significant others seem to provide important additional information about the patients' interpersonal style. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
de Boer, J. B.; Sprangers, M. A.; Aaronson, N. K.; Lange, J. M.; van Dam, F. S.
The objective of this study was to evaluate the feasibility, reliability, validity and responsiveness of the HIV Overview of Problems Evaluation System (HOPES) in a Dutch sample. The HOPES was administered three times in a one-year period to a sample of 106 outpatients with a symptomatic
Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.
This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what are the units of behavior whose reliability are to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function based analysis, which maps the problem space for plant control, is recommended
Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo
Petersen, Kurt Erling
Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested.
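As a hedged illustration of importance sampling as a variance-reduction device for rare failure events (a generic sketch, not the program described above), the example below estimates a small exceedance probability for a standard normal variable by sampling from a density shifted toward the failure region and reweighting by the likelihood ratio; the limit state, distributions and shift are assumptions for illustration (Python):

    # Generic importance-sampling sketch for a rare failure probability.
    import math
    import random

    random.seed(1)

    def failed(x):
        return x > 4.0          # rare event for a standard normal variable

    def crude_mc(n=100_000):
        # At this sample size crude MC typically sees almost no failures,
        # which is exactly the situation variance reduction addresses.
        hits = sum(failed(random.gauss(0.0, 1.0)) for _ in range(n))
        return hits / n

    def importance_sampling(n=100_000, shift=4.0):
        # Sample from N(shift, 1) and reweight by the likelihood ratio
        # phi(x) / phi(x - shift) so the estimator stays unbiased.
        total = 0.0
        for _ in range(n):
            x = random.gauss(shift, 1.0)
            if failed(x):
                total += math.exp(-x * x / 2.0) / math.exp(-(x - shift) ** 2 / 2.0)
        return total / n

    print("crude MC:   ", crude_mc())
    print("importance: ", importance_sampling())   # close to 1 - Phi(4) ~ 3.17e-5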
Sander, P.; Badoux, R.
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
Many results in mechanical design are obtained from a modelling of physical reality and from a numerical solution leading to the evaluation of needs and resources. The goal of the reliability analysis is to evaluate the confidence that can be granted to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)
This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.
Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.
[Conclusions] The study provides evidence of a moderate intensification of health anxiety among Polish adolescents. The health anxiety level was significantly higher among medical students than in the non-medical student group.
Donohue, Brad; Teichner, Gordon; Azrin, Nathan; Weintraub, Noah; Crum, Thomas A.; Murphy, Leah; Silver, N. Clayton
Responses to Life Satisfaction Scale for Problem Youth (LSSPY) items were examined in a sample of 193 substance abusing and conduct disordered adolescents. In responding to the LSSPY, youth endorse their percentage of happiness (0 to 100%) in twelve domains (i.e., friendships, family, school, employment/work, fun activities, appearance, sex…
This project explores two major components that affect transit ridership: travel time reliability and rider retention. It has been recognized that transit travel time reliability may have a significant impact on attractiveness of transit to many ...
Lindvold, Lars R.; Hermann, Gregers G.
Photodynamic diagnosis (PDD) is a convenient and well-documented procedure for diagnosis of bladder cancer and tumours using endoscopic techniques. At present, this procedure is available only for routine use in an operating room (OR) and often with substantial photobleaching effects of the photosensitizer. We present a novel optical design of the endoscopic PDD procedure that allows the procedure to be performed in the outpatient department (OPD) and not only in the OR. Thereby, inpatient procedures lasting 1-2 days may be replaced by a procedure lasting a few hours in the OPD. Urine blurs the fluorescence during PDD used in the OPD. Urine contains fluorescent metabolites that are excited by blue light, giving an opaque green fluorescence confounding the desired red fluorescence (PDD) from the tumour tissue. Measurements from the clinical situation have shown that some systems for PDD based on blue light illumination (PDD mode) and white light illumination used for bladder tumour diagnosis and surgery suffer some inherent disadvantages, i.e., photobleaching in white light that impairs the possibility for PDD, as white light is usually used before the blue light for PDD. Based on spectroscopic observations of urine and the photoactive dye Protoporphyrin IX used in PDD, a novel optical system for use with the cystoscope has been devised that solves the problem of green fluorescence from urine. This and the knowledge of photobleaching pitfalls in established systems make it possible to perform PDD of bladder tumours in the OPD and to improve PDD in the OR.
Schijndel, van A.
Problem description: Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today's society has
This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, research institutes, technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomics dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de]
[List-of-figures fragments: '... inverters connected in a chain'; 'Figure 3: Typical graph showing frequency versus square root of ...'] The work focused on developing an experimental reliability-estimating methodology that could illuminate the lifetime reliability of advanced devices, circuits and ... or FIT of the device. In other words, an accurate estimate of the device lifetime was found, and thus the reliability that can be conveniently
Jones, Harry W.
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with a rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half therefore requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
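The abstract's arithmetic can be formalized in a short back-of-the-envelope sketch (standard reasoning, not taken from the paper): if the remaining correctable failure causes have rates \lambda_1 > \lambda_2 > \dots, the expected exposure time needed to observe, and hence fix, cause k is

    E[T_k] = 1/\lambda_k,

so driving the dominant rate down by successive halvings, \lambda_k = \lambda_1 / 2^{k-1}, costs an expected total test time of

    \sum_{k=1}^{K} 1/\lambda_k = (1/\lambda_1) \sum_{k=1}^{K} 2^{k-1} = (2^{K} - 1)/\lambda_1,

which doubles with every further halving of the failure rate, matching the statement that large reliability gains demand disproportionately long test and redesign campaigns.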
Dummer, Geoffrey W A; Hiller, N
Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea
RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report not only states the facts of the Significant System Events (ESS), but also underlines the main elements dealing with the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the needed for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas
To increase the efficiency of logistic systems, specialists' attention has to be directed to reducing costs and increasing supply chain reliability. Considerable attention has already been paid to cost reduction, and significant progress has been made in that direction. The problem of reliability evaluation, however, is still insufficiently explored, particularly in such an important area as inventory management under dependent demand.
Delionback, L. M.
The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs, in devising a suitable cost-effective policy.
Sørensen, John Dalsgaard
The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...
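To make the cost-versus-reliability formulation above concrete, the following minimal sketch (not from the paper) minimizes a linear cost over a single design variable subject to a FORM reliability constraint. The linear limit state g = R - S, the normal load and resistance statistics, and the cost function are invented for illustration, and simple bisection stands in for the gradient-based procedures the paper develops.

    import math

    # Hypothetical element: resistance R ~ N(mu_R(a), sigma_R(a)), load S ~ N(mu_S, sigma_S).
    # For a linear limit state g = R - S with normal variables, the FORM reliability
    # index has the closed form beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2).
    MU_S, SIGMA_S = 50.0, 10.0      # assumed load statistics (illustrative units)
    STRENGTH_PER_AREA = 20.0        # assumed mean resistance per unit of design area
    COV_R = 0.10                    # assumed coefficient of variation of resistance

    def beta(area):
        mu_r = STRENGTH_PER_AREA * area
        sigma_r = COV_R * mu_r
        return (mu_r - MU_S) / math.sqrt(sigma_r**2 + SIGMA_S**2)

    def cost(area):
        return 1.5 * area           # assumed linear cost function

    # Minimize cost subject to beta(area) >= beta_target: cost grows with area and
    # beta is monotone in area here, so bisection on the constraint boundary suffices.
    def smallest_feasible_area(beta_target, lo=0.1, hi=100.0, tol=1e-6):
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if beta(mid) >= beta_target:
                hi = mid
            else:
                lo = mid
        return hi

    a_opt = smallest_feasible_area(beta_target=3.8)
    print(f"optimal area = {a_opt:.3f}, cost = {cost(a_opt):.3f}, beta = {beta(a_opt):.2f}")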
Blomgren, J.C.; Green, S.J.
Upon successful completion of its research and development technology transfer program, the Electric Power Research Institute's Steam Generator Owners Group (SGOG II) will disband in December 1986 and be replaced in January 1987 by a successor project, the Steam Generator Reliability Project (SGRP). The new project, funded in the EPRI base program, will continue the emphasis on reliability and life extension that was carried forward by SGOG II. The objectives of SGOG II have been met. Causes and remedies have been identified for tubing corrosion problems, such as stress corrosion cracking and pitting, and steam generator technology has been improved in areas such as tube wear prediction and nondestructive evaluation (NDE). These actions have led to improved reliability of steam generators. Now the owners want to continue with a centrally managed program that builds on what has been learned. The goal is to continue to improve steam generator reliability and solve small problems before they become large problems
Blomgren, J.C.; Green, S.J.
Upon successful completion of its research and development technology transfer program, the Electric Power Research Institute's (EPRI's) Steam Generator Owners Group (SGOG II) will disband in December 1986, and be replaced in January 1987, by a successor project, the Steam Generator Reliability Project (SGRP). The new project, funded in the EPRI base program, will continue to emphasize reliability and life extension, which were carried forward by SGOG II. The objectives of SGOG II have been met. Causes and remedies have been identified for tubing corrosion problems such as stress corrosion cracking and pitting, and steam generator technology has been improved in areas such as tube wear prediction and nondestructive evaluation. These actions have led to improved reliability of steam generators. Now the owners want to continue with a centrally managed program that builds on what has been learned. The goal is to continue to improve steam generator reliability and to solve small problems before they become large problems
Sørensen, John Dalsgaard
The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost-benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization...... problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re...
This book shows how to build in, evaluate, and demonstrate reliability and availability of components, equipment, and systems. It presents the state of the art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years of experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New are, in particular, a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods and tools are given in a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis. Because of the Appendice...
Park, Kyoung Su
This book introduces reliability, covering the definition of reliability, reliability requirements, the system life cycle and reliability, reliability and failure rate (reliability characteristics, chance failures, failure rates that change over time, failure modes, and replacement), reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, system maintenance, failures (including relay failures), and analysis of system safety.
A short historical account is given of the development of pressure vessel codes. The subject is then discussed under the headings: the cost of heat exchanger unreliability; degraded performance or failure; fouling; mal-distribution of flow; corrosion; erosion; vibration; thermal fatigue; corrosion fatigue; mal-operation; water hammer; conclusions. (U.K.)
Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum plan of supplies, using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated on this basis. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations which should be taken into account during supply planning with the supplier's functional reliability was presented.
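As a hedged illustration of how such a supply plan can reduce to a linear program, the sketch below minimizes supply cost over two hypothetical suppliers while capping the share routed through the less reliable one. The suppliers, costs, and the way the reliability requirement is linearized are assumptions, not the model from the paper.

    from scipy.optimize import linprog

    demand = 100.0                      # total volume that must be supplied (assumed)
    unit_cost = [4.0, 3.0]              # cost per unit from supplier A and supplier B (assumed)
    max_unreliable_share = 0.4          # reliability requirement reduced to a cap on supplier B

    # Decision variables x = [volume_A, volume_B]; minimize unit_cost @ x subject to
    #   x_A + x_B >= demand                     (demand is covered)
    #   x_B <= max_unreliable_share * demand    (limit exposure to the less reliable channel)
    A_ub = [[-1.0, -1.0],
            [0.0, 1.0]]
    b_ub = [-demand, max_unreliable_share * demand]

    res = linprog(unit_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("volumes:", res.x, "total cost:", res.fun)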
Holt, James P.
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)
Gutscher, W.D.; Johnson, K.J.
Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The remaining failures have such low occurrence rates that they do not cause much down time, and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace
Nourelfath, Mustapha; Nahas, Nabil
The use of neural networks in the reliability optimization field is rare. This paper presents an application of a recent kind of neural network to a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as NP-hard. Our design of a neural network to solve this problem efficiently is based on a quantized Hopfield network. This network allows us to obtain optimal design solutions very frequently and much more quickly than other Hopfield networks
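The underlying combinatorial problem can be stated in a few lines. The sketch below enumerates the multiple-choice design space of a small series system to maximize reliability within a budget; the component data are invented, and exhaustive search is used only to illustrate the formulation, not the quantized Hopfield network proposed in the paper.

    from itertools import product

    # (reliability, cost) of the available versions for each subsystem (assumed data).
    # Exactly one version must be selected per subsystem (multiple-choice constraint),
    # system reliability is the product of selected reliabilities, and cost is bounded.
    subsystems = [
        [(0.90, 2.0), (0.95, 3.5), (0.99, 6.0)],
        [(0.85, 1.0), (0.93, 2.5)],
        [(0.92, 2.0), (0.97, 4.0), (0.995, 7.5)],
    ]
    budget = 10.0

    best = None
    for choice in product(*subsystems):
        cost = sum(c for _, c in choice)
        if cost > budget:
            continue
        reliability = 1.0
        for r, _ in choice:
            reliability *= r
        if best is None or reliability > best[0]:
            best = (reliability, cost, choice)

    print(f"best system reliability {best[0]:.4f} at cost {best[1]:.1f}")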
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity was seen recently in the research and development community, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
Sørensen, John Dalsgaard
Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....
We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs: Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG) in 17 German-speaking samples (29 subsamples), grouped by self-report, other report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
Faber, M.H.; Sørensen, John Dalsgaard
The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....
Basu, Asit P; Basu, Sujit K
This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul
The Second Edition of this well-received textbook presents over a decade of new research in power system reliability-while maintaining the general concept, structure, and style of the original volume. This edition features new chapters on the growing areas of Monte Carlo simulation and reliability economics. In addition, chapters cover the latest developments in techniques and their application to real problems. The text also explores the progress occurring in the structure, planning, and operation of real power systems due to changing ownership, regulation, and access. This work serves as a companion volume to Reliability Evaluation of Engineering Systems: Second Edition (1992).
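As a minimal illustration of the Monte Carlo simulation theme mentioned above, the sketch below estimates the loss-of-load probability of a small generating system by sampling independent unit outages. The unit capacities, forced outage rates, and load level are invented and are not taken from the book.

    import random

    units = [(100.0, 0.04), (100.0, 0.04), (60.0, 0.08), (60.0, 0.08)]  # (capacity MW, forced outage rate)
    load = 230.0          # constant demand level in MW (assumed)
    trials = 200_000

    random.seed(1)
    shortfalls = 0
    for _ in range(trials):
        # Each unit is available with probability (1 - forced outage rate).
        available = sum(cap for cap, q in units if random.random() > q)
        if available < load:
            shortfalls += 1

    print(f"estimated loss-of-load probability: {shortfalls / trials:.4f}")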
This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems
Every year, RTE produces a reliability report for the past year. This report includes a number of results from previous years so that year-to-year comparisons can be drawn and long-term trends analysed. The 2015 report underlines the major factors that have impacted on the reliability of the electrical power system, without focusing exclusively on Significant System Events (ESS). It describes various factors which contribute to present and future reliability and the numerous actions implemented by RTE to ensure reliability today and in the future, as well as the ways in which the various parties involved in the electrical power system interact across the whole European interconnected network
Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo
Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
Parkinson, D.B.; Oestergaard, C.
1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary for a stationary reliability index (min) and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information included (first-order-second-moment-method). 20 basic variables (could be extended)
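The iteration sketched below is a minimal stand-in for the program described above: the user supplies the failure function and its partial derivatives (the role of subroutine FSUR), the basic variables are treated as independent normals, and the Hasofer-Lind design-point iteration yields the reliability index. The limit state and the variable statistics are illustrative assumptions.

    import math

    MU = [40.0, 50.0]      # means of the basic variables (assumed)
    SIG = [5.0, 2.5]       # standard deviations of the basic variables (assumed)

    def g(x):                                   # failure occurs when g(x) <= 0
        return x[0] * x[1] - 1500.0

    def grad_g(x):                              # partial derivatives of g w.r.t. x
        return [x[1], x[0]]

    def reliability_index(iters=50, tol=1e-8):
        u = [0.0, 0.0]                          # start at the means (standard-normal origin)
        for _ in range(iters):
            x = [m + s * ui for m, s, ui in zip(MU, SIG, u)]
            gx = g(x)
            # Chain rule: dg/du_i = dg/dx_i * sigma_i
            grad_u = [d * s for d, s in zip(grad_g(x), SIG)]
            norm2 = sum(d * d for d in grad_u)
            # Hasofer-Lind / Rackwitz-Fiessler update toward the design point
            step = (sum(d * ui for d, ui in zip(grad_u, u)) - gx) / norm2
            u_new = [step * d for d in grad_u]
            if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
                u = u_new
                break
            u = u_new
        return math.sqrt(sum(ui * ui for ui in u))

    print(f"reliability index beta = {reliability_index():.3f}")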
Boner, G.L.; Hanners, H.W.
The University of Dayton Research Institute has concluded a program designed to provide NRC/DOR with technical assistance in evaluating the factors leading to improved reliability of on-site emergency diesel generator (DG) units. The program consisted of a comprehensive review of DG maintenance and operating experience and a comparative evaluation of the DG manufacturers' recommendations. This information will enable the NRC to improve the basis on which it makes regulatory decisions. The primary goal of the program is to better identify the main problem areas which decrease the reliability of the DG units and to make recommendations. The report has attained the program objectives by identifying and discussing the more significant problems and presenting the recommended corrective actions. The identified problems have been categorized into three groups as a function of their significance
In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large scale earthquake simulator. Reliability tests began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, which, considering the period of regular inspection, means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after having overcome the defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of the deterioration process, to confirm the effectiveness of the remedies to defects, and to confirm the accuracy of predicting the behavior of facilities. The reliability of nuclear valves, fuel assemblies, the heat affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)
Sørensen, John Dalsgaard
Reliability assessment, optimal design and optimal operation and maintenance of wind turbines are an area of significant interest for the fast growing wind turbine industry for sustainable production of energy. Offshore wind turbines in wind farms give special problems due to wake effects inside...... the farm. Reliability analysis and optimization of wind turbines require that the special conditions for wind turbine operation are taken into account. Control of the blades implies load reductions for large wind speeds and parking for high wind speeds. In this paper basic structural failure modes for wind...... turbines are described. Further, aspects are presented related to reliability-based optimization of wind turbines, assessment of optimal reliability level and operation and maintenance....
Lim, Tae Jin
This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g. for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection), analysis of common cause failures, and models of repair effect.
The goal of this paper is to develop tools for technology evaluation that address questions involving the economics of large-scale systems. The kind of cost discussed usually involves some dynamic aspect of the energy system. In particular, such properties as flexibility, stability, and resilience are features of entire systems. Special attention must be paid to the question of reliability, i.e., availability on demand. The storage problem and the planning for reliability in utility systems are the subjects of this paper. The introductory chapter addresses preliminary definitions--reliability planning, uncertainty, resilience, and other sensitivities. The study focuses on the contrast between conventional power generation technologies with controllable output and intermittent resources such as wind and solar electric conversion devices. The system studied is a stylized representation of California conditions. Significant differences were found in reliability planning requirements (and therefore costs) for systems dominated by central station plants as opposed to those dominated by intermittent resource technologies. It is argued that existing hydroelectric facilities need re-optimization. These plants provide the only currently existing bulk power storage in electric energy systems. 38 references. (MCW)
Sonnemans, P.J.M.; Geudens, W.H.J.M.
This paper addresses the problem of proper reliability management in business operations today, facing increasing demands on essential business drivers such as time to market, quality and financial profit. In this paper a general method is described of how to achieve product quality in a highly
Mcinroy, John E.; Saridis, George N.
Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic. Due to the unpredictability of loads and strengths of actual...... In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view a modern structural optimization problem...... is formulated in section 3. The formulation is a natural extension of the commonly used formulations in deterministic structural optimization. The mathematical form of the optimization problem is briefly discussed. In section 4 two new optimization procedures especially designed for the reliability...
Sherif S. AbdelSalam
The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.
We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads...... propagation stage. The consequences of this behaviour on the fatigue reliability are discussed....
Kashuba, Oleh Ivanovych; Skliarov, L I; Skliarov, A L
The problems of improving the reliability of electric blasting using electric detonators with nichrome filament bridges are considered. It was revealed that in the calculation of the total resistance of the explosive network it is necessary to allow for an increase of up to 24% over the nominal value
Ochoa, David; Juan, David; Valencia, Alfonso; Pazos, Florencio
The evolution of proteins cannot be fully understood without taking into account the coevolutionary linkages entangling them. From a practical point of view, coevolution between protein families has been used as a way of detecting protein interactions and functional relationships from genomic information. The most common approach to inferring protein coevolution involves the quantification of phylogenetic tree similarity using a family of methodologies termed mirrortree. In spite of their success, a fundamental problem of these approaches is the lack of an adequate statistical framework to assess the significance of a given coevolutionary score (tree similarity). As a consequence, a number of ad hoc filters and arbitrary thresholds are required in an attempt to obtain a final set of confident coevolutionary signals. In this work, we developed a method for associating confidence estimators (P values) to the tree-similarity scores, using a null model specifically designed for the tree comparison problem. We show how this approach largely improves the quality and coverage (number of pairs that can be evaluated) of the detected coevolution in all the stages of the mirrortree workflow, independently of the starting genomic information. This not only leads to a better understanding of protein coevolution and its biological implications, but also to obtain a highly reliable and comprehensive network of predicted interactions, as well as information on the substructure of macromolecular complexes using only genomic information. The software and datasets used in this work are freely available at: http://csbg.cnb.csic.es/pMT/.
Chen Zigen; LI Xingyuan; Shuai Xiaoping.
It is necessary that instruments be calibrated accurately in order to obtain reliable survey data on surface contamination. Some problems in calibrating surface contamination meters are expounded in this paper. A measurement comparison for beta surface contamination meters was organized within a limited scope; in this way survey quality is understood, questions are discovered, and the significance of calibration is further expounded. (Author)
... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.
Karki, Rajesh; Verma, Ajit Kumar
The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti
This paper is concerned with two topics on structural systems reliability theory. One covers automatic generation of failure mode equations, identifications of stochastically dominant failure modes, and reliability assessment of redundant structures. Reduced stiffness matrixes and equivalent nodal forces representing the failed elements are introduced for expressing the safety of the elements, using a matrix method. Dominant failure modes are systematically selected by a branch-and-bound technique and heuristic operations. The other discusses the various optimum design problems based on reliability concept. Those problems are interpreted through a solution to a multi-objective optimization problem. (orig.)
Hai An; Ling Zhou; Hui Sun
To resolve the problems posed by the variety of uncertainty variables that coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solution method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...
There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance
Jones, Harry W.
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
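The spares-sizing point above can be made concrete with a small calculation: assuming a constant failure rate, the number of failures over the mission is Poisson distributed, so the probability that k spares suffice is the Poisson cumulative probability at k. The failure rate, mission length, and assurance goal below are illustrative assumptions.

    import math

    failure_rate = 1.0 / 8760.0      # failures per hour (assumed: about one per year)
    mission_hours = 500 * 24         # 500-day mission
    goal = 0.99                      # required probability that the spares do not run out

    lam = failure_rate * mission_hours   # expected number of failures over the mission

    def prob_spares_sufficient(k, lam):
        # P(number of failures <= k) for a Poisson(lam) failure count.
        return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

    k = 0
    while prob_spares_sufficient(k, lam) < goal:
        k += 1
    print(f"expected failures = {lam:.2f}; spares needed for {goal:.0%} assurance: {k}")

Note that if the assumed failure rate is too low, the computed number of spares is too low as well, which is exactly the under-estimation risk the abstract points out.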
Lalli, V. R.
This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.
Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.
Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg
Reliable functional annotation of genomic data is the key-step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophile's genomes pose significant
Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian
from reliability point of view. The present paper discusses the specifics of system reliability behavior of laminated composite sandwich panels, and solves an example system reliability problem for a glass fiber-reinforced composite sandwich structure subjected to in-plane compression.......Laminated composite sandwich panels have a layered structure, where individual layers have randomly varying stiffness and strength properties. The presence of multiple failure modes and load redistribution following partial failures are the reason for laminated composites to exhibit system behavior...
Baskoro, G.; Rouvroye, J.L.; Bacher, W.; Brombacher, A.C.
This paper describes the on-going research on an accelerated reliability test strategy called MESA (Multiple Environment Stress Analysis) intended to find in a fast and efficient manner (potential) reliability problems during the design phase of high volume consumer products. This test has shown
Lew, B.S.; Yee, D.; Brewer, W.K.; Quattro, P.J.; Kirby, K.D.
This paper reports that the Pacific Gas and Electric Company (PG and E), in cooperation with ABZ, Incorporated and Science Applications International Corporation (SAIC), investigated the use of artificial intelligence-based programming techniques to assist utility personnel in regulatory compliance problems. The result of this investigation is that artificial intelligence-based programming techniques can successfully be applied to this problem. To demonstrate this, a general methodology was developed and several prototype systems based on this methodology were developed. The prototypes address U.S. Nuclear Regulatory Commission (NRC) event reportability requirements, technical specification compliance based on plant equipment status, and quality assurance assistance. This collection of prototype modules is named the safety significance evaluation system
This book discusses reliability and radiation effects in compound semiconductors, which have evolved rapidly during the last 15 years. Johnston's perspective in the book focuses on high-reliability applications in space, but his discussion of reliability is applicable to high reliability terrestrial applications as well. The book is important because there are new reliability mechanisms present in compound semiconductors that have produced a great deal of confusion. They are complex, and appear to be major stumbling blocks in the application of these types of devices. Many of the reliability problems that were prominent research topics five to ten years ago have been solved, and the reliability of many of these devices has been improved to the level where they can be used for ten years or more with low failure rates. There is also considerable confusion about the way that space radiation affects compound semiconductors. Some optoelectronic devices are so sensitive to damage in space that they are very difficu...
Royset, J.O.; Der Kiureghian, A.; Polak, E.
A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method
Background: Molecular signatures are sets of genes, proteins, genetic variants or other variables that can be used as markers for a particular phenotype. Reliable signature discovery methods could yield valuable insight into cell biology and mechanisms of human disease. However, it is currently not clear how to control error rates such as the false discovery rate (FDR) in signature discovery. Moreover, signatures for cancer gene expression have been shown to be unstable, that is, difficult to replicate in independent studies, casting doubts on their reliability. Results: We demonstrate that with modern prediction methods, signatures that yield accurate predictions may still have a high FDR. Further, we show that even signatures with low FDR may fail to replicate in independent studies due to limited statistical power. Thus, neither stability nor predictive accuracy are relevant when FDR control is the primary goal. We therefore develop a general statistical hypothesis testing framework that for the first time provides FDR control for signature discovery. Our method is demonstrated to be correct in simulation studies. When applied to five cancer data sets, the method was able to discover molecular signatures with 5% FDR in three cases, while two data sets yielded no significant findings. Conclusion: Our approach enables reliable discovery of molecular signatures from genome-wide data with current sample sizes. The statistical framework developed herein is potentially applicable to a wide range of prediction problems in bioinformatics.
Reliability and diagnostics are in general two problems discussed separately, yet they are in fact closely related to each other. Here, this relation is considered in the simple case of modular systems. We show how the computation of reliability and diagnostics can be done efficiently within the same Bayesian network induced by the modularity of the structure function of the system.
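A minimal sketch of that connection, using plain enumeration rather than a Bayesian-network library: for a small modular series system, the same joint distribution gives the system reliability (the forward computation) and the posterior probability that each module has failed given that the system has failed (the diagnostic computation). The module failure probabilities are assumed for illustration.

    from itertools import product

    fail_prob = {"power": 0.02, "sensor": 0.05, "logic": 0.01}   # independent modules (assumed)

    def system_works(state):
        # Structure function of a series system: it works only if every module works.
        return all(state.values())

    # Enumerate the joint distribution over module states.
    joint = {}
    for outcome in product([True, False], repeat=len(fail_prob)):
        state = dict(zip(fail_prob, outcome))
        p = 1.0
        for name, works in state.items():
            p *= (1.0 - fail_prob[name]) if works else fail_prob[name]
        joint[tuple(outcome)] = (state, p)

    reliability = sum(p for state, p in joint.values() if system_works(state))
    print(f"system reliability: {reliability:.4f}")

    # Diagnosis: posterior probability that each module has failed, given system failure.
    p_system_failed = 1.0 - reliability
    for name in fail_prob:
        p_joint = sum(p for state, p in joint.values()
                      if not system_works(state) and not state[name])
        print(f"P({name} failed | system failed) = {p_joint / p_system_failed:.3f}")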
Cannon, A.G.; Bendell, A.
Following an introductory chapter on reliability (what it is, why it is needed, and how it is achieved and measured), the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)
Little, S. [Suncor Energy, Calgary, AB (Canada)
Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
Many models have been developed for predicting software reliability. These reliability models are restricted to particular types of methodologies and to a limited number of parameters. There are a number of techniques and methodologies that may be used for reliability prediction, so parameter selection deserves attention when estimating reliability. The reliability of a system may increase or decrease depending on the parameters used; thus there is a need to identify the factors that most heavily affect the reliability of the system. Nowadays, reusability is widely used in various areas of research. Reusability is the basis of Component-Based Systems (CBS). Cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available for applying soft computing techniques to medicine-related problems: clinical medicine uses fuzzy-logic and neural network methodology extensively, while basic medical science uses neural-network and genetic-algorithm approaches most frequently and preferably. There is considerable interest among medical scientists in using the various soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with savings of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents working of soft computing
Lüdeke, Andreas; Giachino, R
High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum. This contribution will describe the forum and advertise its use in the community.
This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.
Smith, Mark A.; Atcitty, Stanley
This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
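A minimal sketch of the fault-tree route from component reliability to system reliability follows; the tree shape and the failure probabilities are invented for a hypothetical power-electronics device, and independence of the basic events is assumed.

    def or_gate(*probs):
        # Failure if any input fails (independent events).
        p_ok = 1.0
        for p in probs:
            p_ok *= (1.0 - p)
        return 1.0 - p_ok

    def and_gate(*probs):
        # Failure only if all inputs fail (e.g. a redundant pair).
        p = 1.0
        for q in probs:
            p *= q
        return p

    # Basic events: annual failure probabilities of individual parts (assumed values).
    igbt, gate_driver, dc_link_cap, fan_a, fan_b, controller = 0.03, 0.02, 0.04, 0.10, 0.10, 0.01

    cooling_loss = and_gate(fan_a, fan_b)                 # redundant fans
    power_stage = or_gate(igbt, gate_driver, dc_link_cap, cooling_loss)
    system_failure = or_gate(power_stage, controller)

    print(f"top-event probability: {system_failure:.4f}")
    print(f"system reliability:    {1.0 - system_failure:.4f}")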
Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de
Hammons, Thomas J.; Voropai, Nikolai I.
This paper addresses the problems of power supply reliability in a market environment. The specific features of economic interrelations between the power supply organization and consumers in terms of reliability assurance are examined and the principles of providing power supply reliability are formulated. The economic mechanisms of coordinating the interests of power supply organization and consumers to provide power supply reliability are discussed. Reliability of restructuring China's powe...
Berg, Melanie; LaBel, Kenneth A.
This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges of verifying a reliable design versus a trusted design?
Topics include the exponential and Weibull distributions, estimating reliability, confidence intervals, reliability growth, O.C. curves, and Bayesian analysis. The material is an introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. The analysis includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future
Gagnon, M; Tahan, S A; Bocher, P; Thibault, D
High Cycle Fatigue (HCF) plays an important role in Francis runner reliability. This paper presents a model in which reliability is defined as the probability of not exceeding a threshold above which HCF contributes to crack propagation. In the context of combined Low Cycle Fatigue (LCF) and HCF loading, the Kitagawa diagram is used as the limit state threshold for reliability. The reliability problem is solved using First-Order Reliability Methods (FORM). A study case is proposed using in situ measured strains and operational data. All the parameters of the reliability problem are based either on observed data or on typical design specifications. From the results obtained, we observed that the uncertainty around the defect size and the HCF stress range play an important role in reliability. At the same time, we observed that expected values for the LCF stress range and the number of LCF cycles have a significant influence on life assessment, but the uncertainty around these values could be neglected in the reliability assessment.
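As a rough illustration of the limit state described above, the sketch below estimates by plain Monte Carlo the probability that an HCF stress range exceeds a Kitagawa-type (El Haddad) threshold for an uncertain defect size; the distributions and material constants are assumptions, and Monte Carlo stands in for the FORM solution used in the paper.

    import math
    import random

    DK_TH = 5.0          # long-crack threshold, MPa*sqrt(m) (assumed)
    DS_0 = 300.0         # fatigue limit of the defect-free material, MPa (assumed)
    Y = 0.65             # geometry factor (assumed)
    A0 = (DK_TH / (Y * DS_0))**2 / math.pi   # El Haddad intrinsic crack size

    def threshold_stress(a):
        # Admissible HCF stress range for a defect of depth a (metres).
        return DK_TH / (Y * math.sqrt(math.pi * (a + A0)))

    random.seed(0)
    trials, exceed = 200_000, 0
    for _ in range(trials):
        defect = random.lognormvariate(math.log(0.5e-3), 0.5)   # defect size, m (assumed)
        hcf_range = random.gauss(120.0, 25.0)                    # HCF stress range, MPa (assumed)
        if hcf_range > threshold_stress(defect):
            exceed += 1

    print(f"probability of exceeding the Kitagawa threshold: {exceed / trials:.4f}")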
Thoft-Christensen, Palle; Nowak, Andrzej S.
The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated......, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown....
The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)
Dumas, L. N.; Shumka, A.
During the first four years of the U.S. Department of Energy (DOE) National Photovoltatic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for test and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The conducted evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address field failure mechanisms observed to date.
Statistical significant change versus relevant or important change in (quasi) experimental design : some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research
Middel, Berrie; van Sonderen, Eric
This paper aims to identify problems in estimating and interpreting the magnitude of intervention-related change over time, or responsiveness, assessed with health outcome measures. Responsiveness is a problematic construct and there is no consensus on how to quantify the appropriate index to
Boesebeck, K.; Heuser, F.W.; Kotthoff, K.
The lecture gives a survey on the application of methods of reliability analysis to assess the safety of nuclear power plants. Possible statements of reliability analysis in connection with specifications of the atomic licensing procedure are especially dealt with. Existing specifications of safety criteria are additionally discussed with the help of reliability analysis by the example of the reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH) [de
Wear, L L; Pinkert, J R
In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.
The human factor reliability program was introduced at Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Initiatives of Excellent Performance in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas of improvement: the need to improve results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: - tools to prevent human error; - managerial observation and coaching; - human factor analysis; - quick information about events involving a human factor; - a human reliability timeline and performance indicators; - basic, periodic and extraordinary training in human factor reliability. (authors)
Wang, W.; Sui, P.; Wu, Y.T.
Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function, and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of calls for g-function calculations since it requires only one full reliability analysis in a design iteration. A cam roller system in a typical high pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application
Information abounds in all fields of real life, which is often recorded as digital data in computer systems and treated as a kind of increasingly important resource. Its increasing volume causes great difficulties in both storage and analysis. Massive data storage in cloud environments has significant impacts on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds that considers both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is composed of storage space occupation cost, data migration cost, and communication cost. According to the analysis of the storage process, the transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its CMPSO solution algorithm. In the experiments, the proposed model is tested in cooperation with 3 storage strategies. Experimental results show that the proposed model is positive and effective. The experimental results also demonstrate that the proposed model can perform much better in combination with proper file splitting methods.
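A minimal Python sketch of the bi-objective evaluation described above: the total cost is the sum of the three cost components named in the abstract, and the storage reliability is the product of the transmission, equipment and software factors (independence assumed). The concrete numbers are invented, and the CMPSO search itself is not reproduced here.

    def total_cost(space_cost, migration_cost, communication_cost):
        return space_cost + migration_cost + communication_cost

    def storage_reliability(transmission_rel, equipment_rel, software_rel):
        # Independent factors assumed, so the service succeeds only if all three do.
        return transmission_rel * equipment_rel * software_rel

    candidate = {"space": 12.0, "migration": 3.5, "comm": 1.8,
                 "trans": 0.995, "equip": 0.990, "soft": 0.985}
    cost = total_cost(candidate["space"], candidate["migration"], candidate["comm"])
    rel = storage_reliability(candidate["trans"], candidate["equip"], candidate["soft"])
    print(cost, rel)   # 17.3, ~0.970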
Nowadays, the requirements on power electronic equipment are demanding with regard to performance, quality and reliability. On the other hand, costs have to be reduced in order to satisfy market rules. To provide cheap, reliable and high-performance products, many standard components with mass production are developed. But the construction of specific products must be considered from two different points of view: on the one hand you can produce specific components, with delays, over-cost problems and possibly quality and reliability problems; on the other hand you can use standard components in adapted topologies. The CEA of Pierrelatte has adopted this last technique of power electronic design for the development of its high voltage pulsed power converters. The technique consists in using standard components and associating them in series and in parallel. The matrix constitutes a high voltage macro-switch where electrical parameters are distributed between the synchronized components. This study deals with the reliability of these structures. It brings up the high reliability aspect of MOSFET matrix associations. Thanks to several homemade test facilities, we obtained a large amount of data concerning the components we use. The understanding of defect propagation mechanisms in matrix structures has allowed us to put forward the necessity of a robust drive system, adapted clamping voltage protection, and careful geometrical construction. All these reliability considerations in matrix associations have notably allowed the construction of a new matrix structure regrouping all the solutions ensuring reliability. Reliable and robust, this product has already reached the industrial stage. (author) [fr
The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-Remote Sensing organization to access a technology until recently reserved to scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment does conflict with traditional low-volume data use for most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large volume use of data coupled with low- cost equipment costs is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications, that may lead to significant use of earth observation data, value added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.
Verma, Ajit Kumar; Karanki, Durga Rao
Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz.,electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...
Dougherty, E.M.; Fragola, J.R.
The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. Draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. Provides a history of human reliability analysis, and includes examples of the application of the systems approach
Roca, Jose L.
Reliability techniques have been developed over time to meet the needs of the diverse engineering disciplines; nevertheless, many consider that a great deal of reliability work was done before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in this topic, but this small great revolution in favour of increasing the reliability figures of products has not been confined to those environments; rather, it has extended to the whole of industry. The massive production characteristic of modern industries drove, four decades ago, a fall in the reliability of its products, on the one hand because of the mass production itself and, on the other, because of recently introduced and not yet stabilized industrial techniques. Industry had to change according to those two new requirements, creating products of medium complexity and assuring a reliability appropriate to production costs and controls. Reliability began to be an integral part of the manufactured product. With this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, providing a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, from academic text-book use to industrial courses. (author)
Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when ... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied ...
Hall, R.E.; Boccio, J.L.
Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant
Cao, Yu; Wirth, Gilson
This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units. The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.
Li Pengcheng; Chen Guohua; Zhang Li; Dai Licao
Human reliability analysis (HRA) methods are reviewed. The theoretical basis of human reliability analysis, the human error mechanism, the key elements of HRA methods, as well as the existing HRA methods, are introduced and assessed. Their shortcomings, current research hotspots and difficult problems are identified. Finally, it takes a close look at the trends in human reliability analysis methods. (authors)
Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for the estimation of the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulation of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes are demonstrated in this paper.
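The following Python sketch illustrates the Monte Carlo estimation step described above for a simple resistance-minus-load limit state; the distributions and parameters are illustrative and are not those of the bridge pier studied in the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 1_000_000
    resistance = rng.lognormal(mean=np.log(900.0), sigma=0.10, size=n)  # kNm, assumed
    load = rng.normal(loc=500.0, scale=90.0, size=n)                    # kNm, assumed

    pf = np.mean(resistance - load < 0.0)          # estimated probability of failure
    beta = -norm.ppf(pf) if pf > 0 else float("inf")  # corresponding reliability index
    print(pf, beta)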
Harms, E. Jr.
This paper reports on the reliability of the Fermilab Antiproton source since it began operation in 1985. Reliability of the complex as a whole as well as subsystem performance is summarized. Also discussed is the trending done to determine causes of significant machine downtime and actions taken to reduce the incidence of failure. Finally, results of a study to detect previously unidentified reliability limitations are presented
Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
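As a sketch of the cost-integration idea described in the report, the following Python fragment picks the reserve level that minimizes annualized capacity cost plus expected outage cost (value of lost load times expected unserved energy). The cost figures and the assumed reserve/unserved-energy relationship are hypothetical.

    def expected_unserved_energy(reserve_mw):
        # Assumed decreasing relationship between reserve margin and unserved energy (MWh/yr).
        return 5000.0 * 0.5 ** (reserve_mw / 50.0)

    def total_cost(reserve_mw, capacity_cost=90_000.0, voll=10_000.0):
        # capacity_cost in $/MW-yr, voll (value of lost load) in $/MWh
        return capacity_cost * reserve_mw + voll * expected_unserved_energy(reserve_mw)

    best = min(range(0, 401, 10), key=total_cost)
    print(best, total_cost(best))   # reserve level with the lowest combined cost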
Bloch, Heinz P
This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.
Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.
This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific
Gintautas, Tomas; Sørensen, John Dalsgaard
Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...
Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl
The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...
Chatzis, Sotirios P; Andreou, Andreas S
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.
Yoshii, Hatsumi; Mandai, Nozomu; Saito, Hidemitsu; Akazawa, Kouhei
Self-stigma, defined by a negative attitude toward oneself combined with the consciousness of being a target of prejudice, is a critical problem for psychiatric patients. Self-stigma studies among psychiatric patients have indicated that high stigma is predictive of detrimental effects such as the delay of treatment and decreases in social participation in patients, and levels of self-stigma should be statistically evaluated. In this study, we developed the Workplace Social Distance Scale (WSDS), rephrasing the eight items of the Japanese version of the Social Distance Scale (SDSJ) to apply to the work setting in Japan. We examined the reliability and validity of the WSDS among 83 psychiatric patients. Factor analysis extracted three factors from the scale items: "work relations," "shallow relationships," and "employment." These factors are similar to the assessment factors of the SDSJ. Cronbach's alpha coefficient for the WSDS was 0.753. The split-half reliability for the WSDS was 0.801, indicating significant correlations. In addition, the WSDS was significantly correlated with the SDSJ. These findings suggest that the WSDS represents an approximation of self-stigma in the workplace among psychiatric patients. Our study assessed the reliability and validity of the WSDS for measuring self-stigma in Japan. Future studies should investigate the reliability and validity of the scale in other countries.
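For illustration, the two reliability statistics reported above (Cronbach's alpha and split-half reliability with the Spearman-Brown correction) can be computed as in the following Python sketch; the eight-item scores are fabricated and do not reproduce the study data.

    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    def split_half(items):
        """Correlate odd/even half scores and apply the Spearman-Brown correction."""
        items = np.asarray(items, dtype=float)
        half1 = items[:, 0::2].sum(axis=1)
        half2 = items[:, 1::2].sum(axis=1)
        r = np.corrcoef(half1, half2)[0, 1]
        return 2.0 * r / (1.0 + r)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(83, 1))   # 83 respondents, as in the study; scores are synthetic
    scores = np.clip(np.round(2.5 + latent + rng.normal(scale=0.8, size=(83, 8))), 1, 4)
    print(cronbach_alpha(scores), split_half(scores))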
The main problems associated with the assessment of the safety and reliability of reactor pressure vessels are briefly discussed. Two approaches are being applied to the assessment: one is based on the crack arrest temperature, the other on the determination of conditions corresponding to brittle fracture formation and on the determination of the critical defect size. The importance is stressed of continuous in-service inspection, which may increase the factor of reliability by up to 10^4 times. (Z.M.)
The first book to specifically focus on offshore wind turbine technology and which addresses practically wind turbine reliability and availability. The book draws on the author's experience of power generation reliability and availability and the condition monitoring of that plant to describe the problems facing the developers of offshore wind farms and the solutions available to them to raise availability, reduce cost of energy and improve through life cost.
Military networks, contrary to commercial ones, require standards which provide the highest level of security and reliability. The process of assuring redundancy of the main connections by applying various protocols and transmission media causes problems with the time needed to re-establish virtual tunnels between different locations in case of a damaged link. This article compares the reliability of different IP (Internet Protocol) tunnels, which were implemented on military network devices.
This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields. D...
Ehsani, A.; Ranjbar, A. M.; Fotuhi Firuzabad, M.; Ehsani, M.
Recently, in many countries, the electric utility industry has been undergoing considerable changes with regard to its structure and regulation. It can be clearly seen that the thrust towards privatization and deregulation or re-regulation of the electric utility industry will introduce numerous reliability problems that will require new criteria and analytical tools that recognize the residual uncertainties in the new environment. In this paper, different risks and uncertainties in competitive electricity markets are briefly introduced; the approach of customers, operators, planners, generation bodies and network providers to the reliability of the deregulated system is studied; the impact of dispersed generation on system reliability is evaluated; and finally, the reliability cost/reliability worth issues in the new competitive environment are considered
Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You
Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for obtaining the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which makes it possible to reduce the level of conflict between the classifiers by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
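The discounting-before-combination step mentioned in the abstract can be illustrated with the following Python sketch, which applies classical Shafer discounting followed by Dempster's rule. The class frame, mass values and reliability factors are invented, and the sketch does not reproduce the cautious discounting rule or the inner-reliability matrix of CF-CRE.

    from itertools import product

    FRAME = frozenset({"a", "b", "c"})   # hypothetical class frame

    def discount(mass, alpha):
        """Shafer discounting: scale masses by alpha, move the remainder to the frame."""
        out = {focal: alpha * m for focal, m in mass.items() if focal != FRAME}
        out[FRAME] = alpha * mass.get(FRAME, 0.0) + (1.0 - alpha)
        return out

    def dempster(m1, m2):
        """Dempster's rule of combination with normalization of the conflict."""
        combined, conflict = {}, 0.0
        for (f1, v1), (f2, v2) in product(m1.items(), m2.items()):
            inter = f1 & f2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
        return {f: v / (1.0 - conflict) for f, v in combined.items()}

    m1 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.2, FRAME: 0.1}   # classifier 1 output
    m2 = {frozenset({"b"}): 0.6, frozenset({"a"}): 0.3, FRAME: 0.1}   # classifier 2 output
    fused = dempster(discount(m1, 0.9), discount(m2, 0.6))            # reliabilities 0.9, 0.6
    print(max(fused, key=fused.get), fused)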
Buden, D.; Hunt, R.N.M.
Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig
Skaler, F.; Djetelic, N.
Operation that is safe, reliable, effective and acceptable to the public is the common message in the mission statements of commercial nuclear power plants (NPPs). To fulfill these goals, the nuclear industry, among other areas, has to focus on: (1) Human Performance (HU) and (2) Equipment Reliability (EQ). The performance objective of HU is as follows: the behaviors of all personnel result in safe and reliable station operation. While unwanted human behaviors in operations mostly result directly in an event, behavior flaws in the areas of maintenance or engineering usually cause decreased equipment reliability. Unsatisfactory human performance has led even the best designed power plants into significant operating events, well-known examples of which can be found in the nuclear industry. Equipment reliability is today recognized as the key to success. While human performance at most NPPs has been improving since the start of WANO / INPO / IAEA evaluations, the open energy market has forced nuclear plants to reduce production costs and operate more reliably and effectively. The balance between these two (opposite) goals has made equipment reliability even more important for safe, reliable and efficient production. Insisting on on-line operation while ignoring some principles of safety could nowadays, in a well-developed safety culture and human performance environment, exceed the cost of electricity losses. In the last decade the leading USA nuclear companies have put a lot of effort into improving equipment reliability, primarily based on the INPO Equipment Reliability Program AP-913, at their NPP stations. The Equipment Reliability Program is the key program not only for safe and reliable operation, but also for Life Cycle Management and Aging Management on the way to nuclear power plant life extension. The purpose of the Equipment Reliability process is to identify, organize, integrate and coordinate equipment reliability activities (preventive and predictive maintenance, maintenance
Shinozuka, M.; Kako, T.; Hwang, H.; Brown, P.; Reich, M.
For the overall safety evaluation of seismic category I structures subjected to various load combinations, a quantitative measure of the structural reliability in terms of a limit state probability can be conveniently used. For this purpose, the reliability analysis method for dynamic loads, which has recently been developed by the authors, was combined with the existing standard reliability analysis procedure for static and quasi-static loads. The significant parameters that enter into the analysis are: the rate at which each load (dead load, accidental internal pressure, earthquake, etc.) will occur, its duration and intensity. All these parameters are basically random variables for most of the loads to be considered. For dynamic loads, the overall intensity is usually characterized not only by their dynamic components but also by their static components. The structure considered in the present paper is a reinforced concrete containment structure subjected to various static and dynamic loads such as dead loads, accidental pressure, earthquake acceleration, etc. Computations are performed to evaluate the limit state probabilities under each load combination separately and also under all possible combinations of such loads. Indeed, depending on the limit state condition to be specified, these limit state probabilities can indicate which particular load combination provides the dominant contribution to the overall limit state probability. On the other hand, some of the load combinations contribute very little to the overall limit state probability. These observations provide insight into the complex problem of which load combinations must be considered for design, for which limit states and at what level of limit state probabilities. (orig.)
Li, Hong Shuang
To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
Approved for public release; distribution unlimited. July 1983, ORC 83-5. This research was supported by the Air Force Office of Scientific Research, Bolling Air Force Base. One node in K is designated the root, and the reliability problem is to calculate the probability that the root can communicate with the remaining nodes of K ⊆ V.
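A Monte Carlo sketch of the root-communication (K-terminal) reliability problem stated above is given below in Python; the small graph and the edge reliabilities are invented for illustration, and exact algorithms would of course be preferable for networks this small.

    import random

    EDGES = {("r", "1"): 0.95, ("r", "2"): 0.90, ("1", "2"): 0.85,
             ("1", "3"): 0.90, ("2", "3"): 0.80}   # assumed edge reliabilities
    K = {"r", "1", "2", "3"}                       # terminal set, "r" is the root

    def root_connects(up_edges):
        """True if the root reaches every node of K over the surviving edges."""
        reached, frontier = {"r"}, ["r"]
        while frontier:
            node = frontier.pop()
            for (a, b) in up_edges:
                for nxt in ((b,) if a == node else (a,) if b == node else ()):
                    if nxt not in reached:
                        reached.add(nxt)
                        frontier.append(nxt)
        return K <= reached

    def k_terminal_reliability(trials=200_000, seed=0):
        rng = random.Random(seed)
        hits = sum(root_connects([e for e, p in EDGES.items() if rng.random() < p])
                   for _ in range(trials))
        return hits / trials

    print(k_terminal_reliability())   # roughly 0.97 for these edge reliabilities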
This paper presents the use of smart reclosers for improving reliability of a distribution system of one of the major cities of Ethiopia. As frequent power interruptions are posing a huge problem to the life of the people and the economy, finding a solution to the problem is very essential. Electric reliability has affected social well ...
The author reviews recent progress in uranium chemistry achieved in CEA laboratories. Like its neighbors in the Mendeleev chart, uranium undergoes hydrolysis, oxidation and disproportionation reactions which make the chemistry of these species in water highly complex. The study of the chemistry of uranium in an anhydrous medium has made it possible to correlate the structural and electronic differences observed in the interaction of uranium(III) and the lanthanides(III) with nitrogen or sulfur molecules and the effectiveness of these molecules in An(III)/Ln(III) separation via liquid-liquid extraction. Recent work on the redox reactivity of trivalent uranium U(III) in an organic medium with molecules such as water or the azide ion (N3-) in stoichiometric quantities led to extremely interesting uranium aggregates, in particular those involved in actinide migration in the environment or in aggregation problems in the fuel processing cycle. Another significant advance was the discovery of a compound containing the uranyl ion with a degree of oxidation (V), UO2+, obtained by oxidation of uranium(III). Recently chemists have succeeded in blocking the disproportionation reaction of uranyl(V) and in stabilizing polymetallic complexes of uranyl(V), opening the way to a systematic study of the reactivity and the electronic and magnetic properties of uranyl(V) compounds. (A.C.)
Engr. Anumaka; Michael Chukwukadibia
Today, the electric power system consists of complex interconnected networks which are prone to different problems that militate against the reliability of the power system. Inadequate reliability in the power system causes problems such as a high failure rate of power system installations and consumer equipment, transient and intransient faults, symmetrical faults, etc. This paper provides an extensive review of power system and equipment reliability and related failure patterns in equipment.
Enevoldsen, I.; Sørensen, John Dalsgaard
Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems...... optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general....
Behind the political statements made about the transformer event at the Kruemmel nuclear power station (KKK) in the summer of 2009 there are fundamental issues of atomic law. Pursuant to Articles 20 and 28 of its Basic Law, Germany is a state in which the rule of law applies. Consequently, the aspects of atomic law associated with the incident merit a closer look, all the more so as the items concerned have been known for many years. Important aspects in the debate about the Kruemmel nuclear power plant are the fact that the transformer is considered part of the nuclear power station under atomic law and thus a ''plant'' subject to surveillance by the nuclear regulatory agencies, on the one hand, and the reliability under atomic law of the operator and the executive personnel responsible, on the other hand. Both ''plant'' and ''reliability'' are terms focusing on nuclear safety. Hence the question to what extent safety was affected in the Kruemmel incident. The classification of the event as 0 = no or only a very slight safety impact on the INES scale (INES = International Nuclear Event Scale) should not be used to put aside the safety issue once and for all. Points of fact and their technical significance must be considered prior to any legal assessment. Legal assessments and regulations are associated with facts and circumstances. Any legal examination is based on the facts as determined and elucidated. Any other procedure would be tantamount to an inadmissible legal advance conviction. Now, what is the position of political statements, i.e. political assessments and political responsibility? If everything is done the correct way, they come at the end, after exploration of the facts and evaluation under applicable law. Sometimes things are handled differently, with consequences which are not very helpful. In the light of the provisions about the rule of law as laid down in the Basic Law, the new federal government should be made to observe the proper sequence of
Jones, Harry W.
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
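As an illustration of computing a reliability growth rate from failure data, the following Python sketch fits the Duane model N(t) ≈ lambda * t**beta (growth when beta < 1) by least squares on a log-log scale; the failure times are fabricated and are not shuttle or ISS data.

    import numpy as np

    failure_times = np.array([40., 110., 260., 500., 900., 1500., 2400., 3600.])  # hours, assumed
    cum_failures = np.arange(1, len(failure_times) + 1)

    # Least-squares fit of log N = log(lambda) + beta * log(t)
    beta, log_lam = np.polyfit(np.log(failure_times), np.log(cum_failures), 1)
    growth_rate = 1.0 - beta
    print(f"beta = {beta:.2f}, growth rate alpha = {growth_rate:.2f}")

    # The instantaneous MTBF improves with time when alpha > 0:
    mtbf_inst = failure_times[-1] ** (1 - beta) / (np.exp(log_lam) * beta)
    print(f"instantaneous MTBF at t = {failure_times[-1]:.0f} h: {mtbf_inst:.0f} h")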
One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, with regard to materials this is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of the system. The main tasks of the reliability analysis of materials are therefore the prediction of the various failure causes, the identification of interactions, and the development of nondestructive testing methods. (RW) [de
Ditlevsen, Ove Dalager; Madsen, H. O.
The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.
What makes the safety problem of nuclear reactors particularly challenging is the demand for high levels of reliability and the limitation of statistical information. The latter is an unfortunate circumstance, which forces deductive theories of reliability to use models and parameter values with weak factual support. The uncertainty about probabilistic models and parameters which are inferred from limited statistical evidence can be quantified and incorporated rationally into inductive theories of reliability. In such theories, the starting point is the information actually available, as opposed to an estimated probabilistic model. But, while the necessity of introducing inductive uncertainty into reliability theories has been recognized by many authors, no satisfactory inductive theory is presently available. The paper presents: a classification of uncertainties and of reliability models for reactor safety; a general methodology to include these uncertainties into reliability analysis; a discussion about the relative advantages and the limitations of various reliability theories (specifically, of inductive and deductive, parametric and nonparametric, second-moment and full-distribution theories). For example, it is shown that second-moment theories, which were originally suggested to cope with the scarcity of data, and which have been proposed recently for the safety analysis of secondary containment vessels, are the least capable of incorporating statistical uncertainty. The focus is on reliability models for external threats (seismic accelerations and tornadoes). As an application example, the effect of statistical uncertainty on seismic risk is studied using parametric full-distribution models
Green, A.E.; Bourne, A.J.
Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which have been outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipments to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions
A methodology is presented in this paper to evaluate the time-dependent system reliability of a pipeline segment that contains multiple active corrosion defects and is subjected to stochastic internal pressure loading. The pipeline segment is modeled as a series system with three distinctive failure modes due to corrosion, namely small leak, large leak and rupture. The internal pressure is characterized as a simple discrete stochastic process that consists of a sequence of independent and identically distributed random variables each acting over a period of one year. The magnitude of a given sequence follows the annual maximum pressure distribution. The methodology is illustrated through a hypothetical example. Furthermore, the impact of the spatial variability of the pressure loading and pipe resistances associated with different defects on the system reliability is investigated. The analysis results suggest that the spatial variability of pipe properties has a negligible impact on the system reliability. On the other hand, the spatial variability of the internal pressure, initial defect sizes and defect growth rates can have a significant impact on the system reliability.
Solomon David J
Background: Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods: The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results: When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion: This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
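The Spearman-Brown prophecy step mentioned in the abstract, projecting the reliability of an average of k ratings from the single-rating reliability, is shown in the following Python sketch; the single-rating reliability and the numbers of ratings are illustrative.

    def spearman_brown(single_rating_reliability, k):
        r = single_rating_reliability
        return k * r / (1.0 + (k - 1.0) * r)

    print(spearman_brown(0.55, 1))   # 0.55
    print(spearman_brown(0.55, 3))   # ~0.79
    print(spearman_brown(0.55, 5))   # ~0.86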
For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. In case that such studies are missing the statistical properties formulated here give upper and lower bounds of the reliability. (orig./HP) [de
Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters
Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.
The main objective for the report is to improve failure data for reliability calculations as parts of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. In the report are presented charts of reliability data for: pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft. Therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of a SpaceWire network. According to the function division of the distributed network, a reliability analysis method based on tasks is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability result of the network system can be deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyze several cases of typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a guiding influence on both task division and topology selection in the SpaceWire network system design phase.
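The series/redundancy arithmetic behind the comparison of the basic and redundant architectures can be sketched in Python as follows: a path reliability is the product of its unit reliabilities, and a dual-redundant architecture fails only if both paths fail. The unit reliabilities are illustrative and the path structure is a simplification of a real SpaceWire topology.

    from functools import reduce

    def series(reliabilities):
        return reduce(lambda a, b: a * b, reliabilities, 1.0)

    def parallel(reliabilities):
        unreliability = reduce(lambda a, b: a * (1.0 - b), reliabilities, 1.0)
        return 1.0 - unreliability

    basic_path = series([0.999, 0.995, 0.998])        # node - link - node, assumed values
    print(basic_path)                                 # ~0.992
    print(parallel([basic_path, basic_path]))         # dual redundancy, ~0.99994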
Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju
This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; the reliability function and failure rate; life distributions and assumptions in reliability; reliability of non-repairable systems; reliability of repairable systems; reliability sampling tests; failure analysis, such as failure analysis by FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and the acceleration factor, and analysis of accelerated life testing data; and maintenance policy concerning replacement and inspection.
Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan
At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. The evaluation method for equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is not only designed in, but also achieved through management. This paper evaluates reliability management capabilities using the reliability configuration capability maturity model (RCM-CMM) evaluation method.
Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire
In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.
Brain-computer interfaces represent one of the most astonishing technologies in our era. However, the grand challenge of chronic instability and limited throughput of the electrode-tissue interface has significantly hindered the further development and ultimate deployment of such exciting technologies. A multidisciplinary research workforce has been called upon to respond to this engineering need. In this paper, I briefly review this multidisciplinary pursuit of chronically reliable neural interfaces from a materials perspective by analyzing the problem, abstracting the engineering principles, and summarizing the corresponding engineering strategies. I further draw my future perspectives by extending the proposed engineering principles.
Dolganov, Andrey; Kagan, Pavel
High-rise buildings have a specificity which significantly distinguishes them from traditional multi-storey buildings. Steel structures are advisable in high-rise buildings in earthquake-prone regions, since steel, due to its plasticity, provides damping of the kinetic energy of seismic impacts. These aspects should be taken into account when choosing the structural scheme of a high-rise building and designing its load-bearing structures. Currently, modern regulatory documents do not quantify the reliability of structures, although the problem of assigning an optimal level of reliability has existed for a long time. The article shows the possibility of designing metal structures of high-rise buildings with a specified reliability. It is proposed to establish a reliability value of 0.99865 (3σ) for structures of buildings and facilities of a normal level of responsibility in calculations for the first group of limit states. For increased (high-rise construction) and reduced levels of responsibility, it is proposed to assign 0.99997 (4σ) and 0.97725 (2σ), respectively, for the provision of load-bearing capacity. The coefficients of use of the cross section of a metal beam for different levels of reliability are given.
Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with a limited energy budget. As the commonly deployed energy-saving technique (e.g. dynamic voltage and frequency scaling (DVFS)) will significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability to satisfy the quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in terms of reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes could preserve the system reliability while still achieving substantial energy saving.
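A common way to capture the reliability penalty of DVFS in analyses of this kind is an exponential fault-rate model in which the transient-fault rate grows as frequency and voltage are scaled down. The sketch below only illustrates that trade-off; the model form, constants and function name are assumptions, not taken from the paper:

    import math

    def task_reliability(wcet: float, f: float, lam0: float = 1e-6,
                         d: float = 2.0, f_min: float = 0.4) -> float:
        # transient-fault rate grows as the normalized frequency f drops:
        #   lambda(f) = lam0 * 10 ** (d * (1 - f) / (1 - f_min))
        lam = lam0 * 10.0 ** (d * (1.0 - f) / (1.0 - f_min))
        exec_time = wcet / f                  # execution stretches at lower speed
        return math.exp(-lam * exec_time)     # probability of fault-free completion

    # Running at half speed saves energy but lengthens execution and raises the
    # fault rate, so reliability drops; reliability-aware schemes compensate
    # (e.g. by reserving capacity for a recovery task) before applying DVFS.
    print(task_reliability(1.0, 1.0), task_reliability(1.0, 0.5))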
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
This report describes a methodology for reliability and risk allocation in nuclear power plants. The work investigates the technical feasibility of allocating reliability and risk, which are expressed in a set of global safety criteria and which may not necessarily be rigid, to various reactor systems, subsystems, components, operations, and structures in a consistent manner. The report also provides general discussions on the problem of reliability and risk allocation. The problem is formulated as a multiattribute decision analysis paradigm. The work mainly addresses the first two steps of a typical decision analysis, i.e., (1) identifying alternatives, and (2) generating information on outcomes of the alternatives, by performing a multiobjective optimization on a PRA model and reliability cost functions. The multiobjective optimization serves as the guiding principle to reliability and risk allocation. The concept of ''noninferiority'' is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The final step of decision analysis, i.e., assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided, and several outstanding issues such as generic allocation, preference assessment, and uncertainty are discussed. 29 refs., 44 figs., 39 tabs
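The noninferiority concept at the heart of this approach can be illustrated with a short sketch that reduces a set of candidate allocations, each summarized by a (cost, risk) pair, to its noninferior (Pareto) set; the numbers are invented for illustration and do not come from the report:

    from typing import List, Tuple

    def noninferior(alternatives: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
        # keep an alternative unless some other alternative is at least as good
        # in both objectives (cost, risk) and is not the same point
        kept = []
        for a in alternatives:
            dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in alternatives)
            if not dominated:
                kept.append(a)
        return kept

    # (cost in arbitrary units, contribution to the risk measure)
    candidates = [(1.0, 5e-5), (1.5, 2e-5), (2.0, 2e-5), (3.0, 1e-5)]
    print(noninferior(candidates))   # (2.0, 2e-5) is dominated by (1.5, 2e-5)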
Kucharavy , Dmitry; De Guio , Roland
International audience; The ability to foresee future technology is a key task of Innovative Design. The paper focuses on the obstacles to reliable prediction of technological evolution for the purpose of Innovative Design. First, a brief analysis of problems for existing forecasting methods is presented. The causes for the complexity of technology prediction are discussed in the context of reduction of the forecast errors. Second, using a contradiction analysis, a set of problems related to ...
Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)
The overall objective of this research project is to develop a technical basis for flexible piping designs which will improve piping reliability and minimize the use of pipe supports, snubbers, and pipe whip restraints. The current study was conducted to establish the necessary groundwork based on the piping reliability analysis. A confirmatory piping reliability assessment indicated that removing rigid supports and snubbers tends to either improve or affect very little the piping reliability. The authors then investigated a couple of changes to be implemented in Regulatory Guide (RG) 1.61 and RG 1.122 aimed at more flexible piping design. They concluded that these changes substantially reduce calculated piping responses and allow piping redesigns with significant reduction in number of supports and snubbers without violating ASME code requirements. Furthermore, the more flexible piping redesigns are capable of exhibiting reliability levels equal to or higher than the original stiffer design. An investigation of the malfunction of pipe whip restraints confirmed that the malfunction introduced higher thermal stresses and tended to reduce the overall piping reliability. Finally, support and component reliabilities were evaluated based on available fragility data. Results indicated that the support reliability usually exhibits a moderate decrease as the piping flexibility increases. Most on-line pumps and valves showed an insignificant reduction in reliability for a more flexible piping design
Laviron, A.; Berard, C.; Quenee, R.
Reliability studies of nuclear power plant safety functions have, up to now, required the use of large computers. As they are of universal use, these big machines are not very well adapted to dealing with reliability problems at low cost. ESCAF has been developed as a substitute for large computers in order to save time and money. ESCAF is a small electronic device which can be used in connection with a minicomputer. It allows complex system reliability analyses (qualitative and quantitative) to be performed and critical element influences, such as common cause failures, to be studied. In this paper, the device is described and its features and abilities are outlined: easy to implement, swift running, low working cost. Its application range covers all cases where good reliability is needed
Solder joints are ubiquitous in electronic consumer products. The European Union has a directive to ban the use of Pb-based solders in these products on July 1st, 2006. There is an urgent need for an increase in the research and development of Pb-free solders in electronic manufacturing. For example, spontaneous Sn whisker growth and electromigration induced failure in solder joints are serious issues. These reliability issues are quite complicated due to the combined effect of electrical, mechanical, chemical, and thermal forces on solder joints. To improve solder joint reliability, the science of solder joint behavior under various driving forces must be understood. In this book, the advanced materials reliability issues related to copper-tin reaction and electromigration in solder joints are emphasized and methods to prevent these reliability problems are discussed.
Freight transportation provides a significant contribution to our nation's economy. A reliable and accessible freight network enables businesses in the Twin Cities to be more competitive in the Upper Midwest region. Accurate and reliable freight data...
Drost, Ellen A.
In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…
Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard
Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...
Kammerer, Catherine C.
Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.
... often, it could be a sign of a balance problem. Balance problems can make you feel unsteady. You may ... related injuries, such as a hip fracture. Some balance problems are due to problems in the inner ...
Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Bogduk, Nikolai
Trigger points are promoted as an important cause of musculoskeletal pain. There is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting. To systematically review the literature on the reliability of physical examination for the diagnosis of trigger points. MEDLINE, EMBASE, and other sources were searched for articles reporting the reliability of physical examination for trigger points. Included studies were evaluated for their quality and applicability, and reliability estimates were extracted and reported. Nine studies were eligible for inclusion. None satisfied all quality and applicability criteria. No study specifically reported reliability for the identification of the location of active trigger points in the muscles of symptomatic participants. Reliability estimates varied widely for each diagnostic sign, for each muscle, and across each study. Reliability estimates were generally higher for subjective signs such as tenderness (kappa range, 0.22-1.0) and pain reproduction (kappa range, 0.57-1.00), and lower for objective signs such as the taut band (kappa range, -0.08-0.75) and local twitch response (kappa range, -0.05-0.57). No study to date has reported the reliability of trigger point diagnosis according to the currently proposed criteria. On the basis of the limited number of studies available, and significant problems with their design, reporting, statistical integrity, and clinical applicability, physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points. The reliability of trigger point diagnosis needs to be further investigated with studies of high quality that use current diagnostic criteria in clinically relevant patients.
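For readers unfamiliar with how such agreement statistics are obtained, a minimal sketch of Cohen's kappa for two examiners rating the presence or absence of a trigger point follows; the counts are invented for illustration and are not data from the review:

    def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
        # 2x2 agreement table for two examiners:
        #   a = both positive, d = both negative, b and c = disagreements
        n = a + b + c + d
        p_observed = (a + d) / n
        p_pos = ((a + b) / n) * ((a + c) / n)   # chance agreement on "positive"
        p_neg = ((c + d) / n) * ((b + d) / n)   # chance agreement on "negative"
        p_chance = p_pos + p_neg
        return (p_observed - p_chance) / (1.0 - p_chance)

    # e.g. 18 joint positives, 4 + 6 disagreements, 22 joint negatives -> kappa = 0.6
    print(round(cohens_kappa(18, 4, 6, 22), 2))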
Paap, Kenneth R; Sawi, Oliver
Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
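A standard classical-test-theory result, not specific to this paper, helps explain why difference (interference or switching-cost) scores tend to be less reliable than the single mean RTs they are built from. A small sketch with illustrative numbers:

    def difference_score_reliability(r_xx: float, r_yy: float, r_xy: float,
                                     sd_x: float, sd_y: float) -> float:
        # Reliability of the difference D = X - Y in classical test theory:
        #   r_DD = (sd_x^2 r_xx + sd_y^2 r_yy - 2 r_xy sd_x sd_y)
        #          / (sd_x^2 + sd_y^2 - 2 r_xy sd_x sd_y)
        num = sd_x**2 * r_xx + sd_y**2 * r_yy - 2 * r_xy * sd_x * sd_y
        den = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
        return num / den

    # Two condition means, each with reliability 0.85 and correlated 0.75:
    # the difference score is far less reliable (0.4) than either single mean RT.
    print(round(difference_score_reliability(0.85, 0.85, 0.75, 50.0, 50.0), 2))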
Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.
Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.
"Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and ... Vol. R-23, No. 4, 226-30, October 1974. ... deg C, mounted on a 4-inch square, 0.250-inch thick alloy aluminum panel. This mounting technique should be taken into consideration
The development of transportation has been a significant factor in the development of civilisation as a whole. Our technical ability to move people and goods now seems virtually limitless when one considers for example the achievements of the various space programmes. Yet our current achievements rely heavily on high standards of safety and reliability from equipment and the human component of transportation systems. Recent failures have highlighted our dependence on equipment and human reliability. This book represents the proceedings of the 1989 Safety and Reliability Society symposium held at Bath on 11-12 October 1989. The structure of the book follows the structure of the symposium itself and the papers selected represent current thinking in the wide field of transportation, and the areas of rail (6 papers, three on railway signalling), air including space (two papers), road (one paper), road and rail (two papers) and sea (three papers) are covered. There are four papers concerned with general transport issues. Three papers concerned with the transport of radioactive materials are indexed separately. (author)
Lar, U A; Tejan, A B
This paper attempts to discuss the links between the geochemical composition of rocks and minerals and the geographical distribution of diseases in human beings in Nigeria. We know that the natural composition of elements in our environment (in the bedrock, soils, water, and vegetation) may be the major cause of enrichment or depletion in these elements and may become a direct risk to human health. Similarly, anthropogenic activities such as mining and mineral processing, industrial waste disposal, agriculture, etc., could distort the natural geochemical equilibrium of the environment. Thus, the enrichment or depletion of geochemical elements in the environment is controlled by natural and/or anthropogenic processes. The increased ingestion of toxic trace elements such as As, Cd, Hg, Pb, and F, whether directly or indirectly, adversely affects human health. Of these, Cd has the most dangerous long-term effect on human health. Environmental exposure to As and Hg is a causal factor in human carcinogenesis and numerous cancer health disorders. Available information on iodine deficiency disorder (IDD) in Nigeria indicates goiter prevalence rates of between 15% and 59% in several affected areas. There have been reported cases of dental fluorosis resulting from intake of water with fluoride content >1.5 ppm. Dental caries among children shows an overall prevalence rate of 39.9%. Within the Younger Granite province in central Nigeria, cases of cancer and miscarriages in pregnant women have been linked to natural radiation. These examples and a number of others from the existing literature underscore the pressing need for the development of collaborative research to increase our understanding of the relationship between the geographical distribution of human and animal diseases in Nigeria and environmental factors. We submit that such knowledge is essential for the control and management of these diseases.
Feldt-Rasmussen, Ulla; Rasmussen, Ase Krogh
Coexistence of differentiated thyroid cancer (DTC) and thyroid autoimmune diseases could represent a mere coincidence due to the frequent occurrence of autoimmunity, but there may also be a pathological and causative link between the two conditions. The coincidence of DTC with Hashimoto's disease...... has been variably reported at between 0.5 and 22.5% and of DTC with Graves' disease between 0 and 9.8%. In this review available evidence for thyroid autoimmunity in DTC is summarized and it is concluded that thyroid cancer does coexist with thyroid autoimmunity, implying that patients treated...... TgAb measurements may be used as a surrogate marker for recurrence of thyroid cancer during the long-term monitoring of DTC patients....
Henriksen, Britt I. F.; Møller, Steen Henrik
The objective of this study was to test the hypothesis that the body condition of the mink dam, the frequency of dirty nests, frequency of injuries and diarrhoea change significantly with the day of assessment, post-partum, within the data collection period from parturition to weaning, influencing...... feeding', but not by enough to affect the estimated welfare classification. The score for the three other measures also varied with date of assessment but not enough to affect the classification. However, the observed change in the four measures we focused on indicates that a change in the overall Wel......Fur classification can occur if these or other measures change a little more for the better or worse. Possible solutions to this could be reducing the time window for assessment, development of a valid correction factor or to stratify the visits into an early, middle and late visit on a farm within the three...
Overpopulation of developing countries in general, and Rwanda in particular, is not just their problem but a problem for developed countries as well. Rapid population growth is a key factor in the increase of poverty in sub-Saharan Africa. Population growth outstrips food production. Africa receives more and more foreign food, economic, and family planning aid each year. The Government of Rwanda encourages reduced population growth. Some people criticize it, but this criticism results in mortality and suffering. One must combat this ignorance, but attitudes change slowly. Some of these same people find the government's acceptance of family planning an invasion of their privacy. Others complain that rich countries do not have campaigns to reduce births, so why should Rwanda do so? The rate of schooling does not increase in Africa, even though the number of children in school increases, because of rapid population growth. Education is key to improvements in Africa's socioeconomic growth. Thus, Africa is underpopulated in terms of potentiality but overpopulated in terms of reality, current conditions, and possibilities of overexploitation. Africa needs to invest in human resources. Families need to save, and to do so, they must refrain from having many children. Africa should resist the temptation to waste, as rich countries do, and denounce it. Africa needs to become more independent of these countries, but structural adjustment plans, growing debt, and rapid population growth limit national independence. Food aid is a means for developed countries to dominate developing countries. Modernization through foreign aid has had some positive effects on developing countries (e.g., improved hygiene, mortality reduction), but these also sparked rapid population growth. Rwandan society is no longer traditional, but it is also not yet modern. A change in mentality to fewer births, better quality of life for living infants, better education, and less burden for women must occur
The article presents the problem of the influence of thermodynamic factors on human fallibility in different zones of thermal discomfort. It describes the energy processes in the human body and gives a formal description of the energy balance of human thermoregulation. Human reactions to temperature changes of the internal and external environment are pointed out, including reactions associated with physical exercise. A methodology is given to estimate and determine the reliability indicators of a person acting in different zones of thermal discomfort. A significant effect of thermodynamic factors on the reliability and safety of a person is shown.
Wang, Wei; Xu, Jintong; Zhang, Yan; Li, Xiangyang
This paper concerns HgCdTe (MCT) infrared photoconductor detectors with high operating temperature. Near-room-temperature operation of detectors has the advantages of light weight, lower cost and convenient usage. Their performances are modest and they suffer from reliability problems. These detectors face issues with the stability of the package, the chip bonding area and the passivation layers. It is important to evaluate and improve the reliability of such detectors. Defective detectors were studied with SEM (scanning electron microscope) and microscopy. Statistically significant differences were observed between the influence of operating temperature and the influence of humidity. It was also found that humidity has a statistically significant influence upon the stability of the chip bonding and passivation layers, and that the amount of humidity is not strongly correlated to the damage on the surface. Considering the commonly found failure modes in detectors, special test structures were designed to improve the reliability of detectors. An accelerated life test was also implemented to estimate the lifetime of the high operating temperature MCT photoconductor detectors.
Oleinikova, I.; Krishans, Z.; Mutule, A.
The authors propose to select long-term solutions to the reliability problems of electrical networks at the stage of development planning. The guidelines or basic principles of such optimization are: 1) its dynamical nature; 2) development sustainability; 3) integrated solution of the problems of network development and electricity supply reliability; 4) consideration of information uncertainty; 5) concurrent consideration of the network and generation development problems; 6) application of specialized information technologies; 7) definition of requirements for independent electricity producers. In the article, the major aspects of the liberalized electricity market, its functions and tasks are reviewed, with emphasis placed on the optimization of electrical network development as a significant component of sustainable management of power systems.
Longhurst, F.; Wessels, H.
Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.
Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination or reproducibility of results is very poor. On the other hand, a defect can be detected on each of subsequent examinations for higher reliability and still have poor reproducibility of results
For the next generation of high performance, high average luminosity colliders, the ''factories,'' reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis
Kasperski, M.; Geurts, C.P.W.
The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the
In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....
According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A
This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...
Stanley, Leanne M.; Edwards, Michael C.
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Pahlevani, Peyman; Paramanathan, Achuthan; Hundebøll, Martin
The advantages of network coding have been extensively studied in the field of wireless networks. Integrating network coding with the existing IEEE 802.11 MAC layer is a challenging problem. The IEEE 802.11 MAC does not provide any reliability mechanisms for overheard packets. This paper addresses...... this problem and suggests different mechanisms to support reliability as part of the MAC protocol. Analytical expressions to this problem are given to qualify the performance of the modified network coding. These expressions are confirmed by numerical results. While the suggested reliability mechanisms...
Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords: compressor system, reliability, reliability block diagram, RBD ... the same structure has been kept with the three subsystems: air flow, oil flow and ... and Safety in Engineering Design", Springer, 2009. P. O'Connor ...
Achmad, S.; Somantri, A.
PT. Badak's LNG sales commitment has been steadily increasing; therefore, there has been more emphasis on improving and maintaining the LNG plant reliability. From plant operation historical records, the Badak LNG plant experienced a high number of LNG process train trips and down time from 1977 through 1988. The highest annual number of LNG plant trips (50 times) occurred in 1983 and the longest LNG process train down time (1259 train-hours) occurred in 1988. Since 1989, PT. Badak has been able to reduce the number of LNG process train trips and down time significantly. In 1994 the number of LNG process train trips was 18 and the longest LNG process train down time was 377 train-hours. This plant reliability improvement was achieved by implementing plant reliability improvement programs, beginning with the design of the new facilities and continuing with the maintenance and modification of the existing facilities. To improve reliability of the existing facilities, PT. Badak has been implementing comprehensive maintenance programs to reduce the frequency and down time of plant outages, such as Preventive and Predictive Maintenance, as well as procurement and material improvements, since the PT. Badak site is in a remote area. By implementing the comprehensive reliability maintenance, PT. Badak has been able to reduce the LNG process train trips to 18 and down time to 337 train-hours in 1994, with a subsequent maintenance cost reduction. The average PT. Badak plant availability from 1985 to 1995 is 94.59%. New facilities were designed according to the established PT. Badak design philosophy, master plan and specifications. The design of new facilities was modified to avoid certain problems from past experience. (au)
Madsen, Henrik; Burtschy, Bernard; Albeanu, G.
This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered and the reliability of the networks of smart devices are discussed. Combining the reliability...... requirements of grid and cloud paradigms with the reliability requirements of networks of sensor and actuators it follows that designing a reliable Fog computing platform is feasible....
Power distribution systems are basic parts of power systems, and the reliability of these systems is at present a key issue for power engineering development and requires special attention. Operation of distribution systems is accompanied by a number of random factors that produce a large number of unplanned interruptions. Research has shown that the predominant factors that have a significant influence on the reliability of distribution systems are: weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of random behavior and presents estimations of the reliability of predominantly rural electrical distribution systems.
Bickerton, George E.
Although there has been no major activity release to deplore from any of the 110 industrial reactors authorized to operate in the US, the nuclear incident that occurred at the Three Mile Island plant in 1979 urged the public conscience toward the necessity of readiness to cope with events of this type. The personnel of the Emergency Planning Office functioning in the frame of the US Department of Agriculture has already participated in around 600 intervention drills at the federal, local or state scale to plan, test or assess radiological emergency plans, or to intervene locally. These exercises allowed acquiring significant experience in elaborating emergency plans, planning the drills, working out scenarios and evaluating the potential impact of accidents from the agricultural point of view. We have also taken part in different international drills, among which the most recent are INEX 1 and RADEX 94. We have found on these occasions that agricultural problems are essential preoccupations in most cases, no matter whether the context is international, national, state or local. The paper poses problems specifically related to milk, fruits and vegetables, soils, meat and meat products. Finally, the paper discusses issues such as drill planning, alarm and notification, sampling strategy, access authorizations for farmers, and removal of contaminated wastes. A number of related social, political and economic problems are also mentioned
Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within a context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically changing the structure of the power system of tomorrow and the way in which it will operate - particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of a globally satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and - most noteworthy - having to manage a cold spell just as several nuclear power plants had been shut down. In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: - maintaining investment levels of euro 1.5 billion per year; - increasing cross-zonal capacity at borders with our neighbouring countries, thus bolstering the security of our electricity supply; - implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); - involvement in tests or projects
Gen, Mitsuo; Yun, Young Su
In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundancy components in series and/or parallel systems and alternative designs are available. Reliability optimization problems concentrate on optimal allocation of redundancy components and optimal selection of alternative designs to meet system requirement. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic method, Lagrangean multiplier method and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approach for various reliability optimization problems, such as reliability optimization of redundant system, reliability optimization with alternative design, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce the hybrid approaches for combining GA with fuzzy logic, neural network and other conventional search techniques. Finally, we have some experiments with an example of various reliability optimization problems using hybrid GA approach
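To give a flavour of the GA-based approach surveyed above, the sketch below applies a very small genetic algorithm to a classic redundancy-allocation problem: choose the number of parallel components per subsystem so as to maximize series-system reliability under a cost budget. The component data and GA settings are invented for illustration and do not come from the paper:

    import random

    random.seed(1)
    P = [0.90, 0.85, 0.95]       # component reliability per subsystem
    C = [2.0, 3.0, 1.5]          # component cost per subsystem
    BUDGET = 20.0
    MAX_RED = 4                  # at most 4 parallel components per subsystem

    def system_reliability(x):
        # series system of parallel groups: R = prod(1 - (1 - p_i) ** n_i)
        r = 1.0
        for p, n in zip(P, x):
            r *= 1.0 - (1.0 - p) ** n
        return r

    def fitness(x):
        cost = sum(c * n for c, n in zip(C, x))
        return system_reliability(x) if cost <= BUDGET else 0.0   # infeasible -> 0

    def mutate(x):
        y = list(x)
        y[random.randrange(len(y))] = random.randint(1, MAX_RED)
        return y

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    pop = [[random.randint(1, MAX_RED) for _ in P] for _ in range(20)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        pop = parents + children

    best = max(pop, key=fitness)
    print(best, round(system_reliability(best), 4))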
Niemann, Hans Henrik; Stoustrup, J.
Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered...... in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view....... Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced which can be used to analyze how large faults...
Mahadevan, Sankaran; Han, Song
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Yastrebenetsky, M.A.; Goldrin, V.M.; Garagulya, A.V.
The elaboration of WWER monitoring systems reliability measures is described in this paper. The evaluation is based on statistical data about failures that have been collected at the Ukrainian operating nuclear power plants (NPP). The main attention is devoted to the radiation safety monitoring system and the unit information computer system, which collects information from different sensors and systems of the unit. Reliability measures were used for solving problems connected with life extension of the instruments, and for other purposes. (author). 6 refs, 6 figs
Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored
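The decomposition idea can be illustrated on systems that reduce to nested series/parallel blocks, for which exact reliability is cheap to compute even though general network reliability is intractable in the sense described above. The structure below is a toy example, not one of the systems analysed in the report:

    def block_reliability(block):
        # A block is either a number (component reliability) or a pair
        # ("series" | "parallel", [sub-blocks]); recurse over the structure.
        if isinstance(block, (int, float)):
            return float(block)
        kind, parts = block
        rs = [block_reliability(p) for p in parts]
        if kind == "series":
            r = 1.0
            for x in rs:
                r *= x
            return r
        # parallel: the group fails only if every branch fails
        q = 1.0
        for x in rs:
            q *= 1.0 - x
        return 1.0 - q

    system = ("series", [0.99, ("parallel", [0.9, 0.9]),
                         ("parallel", [0.95, 0.8, 0.8])])
    print(round(block_reliability(system), 5))   # 0.97814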
Internet is used by many patients to obtain relevant medical information. We assessed the impact of "Google" search on the knowledge of the parents whose ward suffered from squint. In 21 consecutive patients, the "Google" search improved the mean score of correct answers from 47% to 62%. We found that the "Google" search was a useful and reliable source of information for the patients with regard to the disease etiopathogenesis and the problems caused by the disease. The internet-based information, however, was incomplete and not reliable with regard to the disease treatment.
Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.
Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided in four main...... tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can...
Pescatore, C.; Sastre, C.
Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table
Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D
About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation sources projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation, replacement of nuclear power plants and others, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.
Landers, John; Rogers, Erin; Gerke, Gretchen
A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (4) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat to include HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.
Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon
of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high and low-voltage side of a half-bridge IGBT separately in every fundamental......The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage...... is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation...
This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given as well as case studies
Ferrari, Vera; Bradley, Margaret M.; Codispoti, Maurizio; Lang, Peter J.
Studies of cognition often use an “oddball” paradigm to study effects of stimulus novelty and significance on information processing. However, an oddball tends to be perceptually more novel than the standard, repeated stimulus as well as more relevant to the ongoing task, making it difficult to disentangle effects due to perceptual novelty and stimulus significance. In the current study, effects of perceptual novelty and significance on ERPs were assessed in a passive viewing context by presenting repeated and novel pictures (natural scenes) that either signaled significant information regarding the current context or not. A fronto-central N2 component was primarily affected by perceptual novelty, whereas a centro-parietal P3 component was modulated by both stimulus significance and novelty. The data support an interpretation that the N2 reflects perceptual fluency and is attenuated when a current stimulus matches an active memory representation and that the amplitude of the P3 reflects stimulus meaning and significance. PMID:19400680
Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen
A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
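A much simplified version of the module-combination and redundancy-versus-repairability comparison might look like the sketch below; the module reliabilities, the series/independence assumptions and the two options compared are illustrative only, not values from the reported method:

    from math import comb

    def robot_reliability(modules):
        # a robot works only if all of its modules work (series assumption)
        r = 1.0
        for m in modules:
            r *= m
        return r

    def k_of_n(r: float, n: int, k: int) -> float:
        # probability that at least k of n identical, independent robots survive
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    modules = [0.95, 0.98, 0.90]          # e.g. mobility, power, sensing
    r_single = robot_reliability(modules)

    # Option A: two robots, mission succeeds if at least one survives (spare robot)
    option_a = k_of_n(r_single, n=2, k=1)
    # Option B: one robot whose weakest module is duplicated on board (spare part)
    option_b = robot_reliability([0.95, 0.98, 1 - (1 - 0.90) ** 2])
    print(round(r_single, 3), round(option_a, 3), round(option_b, 3))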
In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. Whereas in the past this cost was rolled into the average cost of electricity to all, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution
Nuclear Regulatory Commission — This dataset provides a list of Nuclear Regulatory Commission (NRC) issued significant enforcement actions. These actions, referred to as "escalated", are issued by...
Grønbæk, Lars Jesper; Schwefel, Hans-Peter; Kjærgaard, Jens Kristian
, representative diagnosis performance metrics have been defined and their closed-form solutions obtained for the Markov model. These equations enable model parameterization from traces of implemented diagnosis components. The diagnosis model has been integrated in a reliability model assessing the impact...... of the diagnosis functions for the studied reliability problem. In a simulation study we finally analyze trade-off properties of diagnosis heuristics from literature, map them to the analytic Markov model, and investigate its suitability for service reliability optimization....
Hoppa, Mary Ann; Wilson, Larry W.
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
...--Staffing). Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, 72 FR 16416 (Apr... on the North American bulk electric system are competent to perform those reliability-related tasks... PER-004-2 will achieve a significant improvement in the reliability of the Bulk-Power System and...
Murthy, D.N.P.; Rausand, M.; Virtanen, S.
Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.
Speech Problems (KidsHealth / For Teens): ... a person's ability to speak clearly. Some Common Speech and Language Disorders: Stuttering is a problem that ...
Duty-cycled sensor networks provide a new perspective for improvement of energy efficiency and reliability assurance of multi-hop cooperative sensor networks. In this paper, we consider the energy-efficient cooperative node sleeping and clustering problems in cooperative sensor networks where clusters of relay nodes jointly transmit sensory data to the next hop. Our key idea for guaranteeing reliability is to exploit the on-demand number of cooperative nodes, facilitating the prediction of personalized end-to-end (ETE) reliability. Namely, a novel reliability-aware cooperative routing (RCR) scheme is proposed to select k cooperative nodes at every hop (RCR-selection). After selecting k cooperative nodes at every hop, all of the non-cooperative nodes will go into sleep status. In order to solve the cooperative node clustering problem, we propose the RCR-based optimal relay assignment and cooperative data delivery (RCR-delivery) scheme to provide low-communication-overhead data transmission and an optimal duty cycle for a given number of cooperative nodes when the network is dynamic, which enables part of the cooperative nodes to switch into idle status for further energy saving. Through extensive OPNET-based simulations, we show that the proposed scheme significantly outperforms existing geographic routing schemes and beaconless geographic routing in wireless sensor networks with a highly dynamic wireless channel and controls energy consumption, while ETE reliability is effectively guaranteed.
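The on-demand choice of the number of cooperative nodes can be sketched as follows: given a per-link delivery probability and an end-to-end reliability target over H hops, each hop needs enough cooperating relays that at least one transmission succeeds. The numbers and the independence assumption are illustrative only and are not taken from the paper:

    def cooperative_nodes_per_hop(p_link: float, ete_target: float, hops: int) -> int:
        # Split the end-to-end target evenly across hops, then find the smallest k
        # with 1 - (1 - p_link) ** k >= per-hop target.
        per_hop_target = ete_target ** (1.0 / hops)
        k = 1
        while 1.0 - (1.0 - p_link) ** k < per_hop_target:
            k += 1
        return k

    # e.g. 80% link delivery probability, 99% end-to-end target over 6 hops -> k = 4
    print(cooperative_nodes_per_hop(0.8, 0.99, 6))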
The present thesis consists of four papers, three of which are of an expository nature and one more theoretical. The first two papers have a natural coupling to the man-machine interface. The first paper is devoted to the concept of maintainability and the role of man as maintenance technician. The second paper discusses aspects of human reliability, mainly studying man as operator. However, maintenance tasks can be studied in the same manner. The third paper concerns reliability prediction for mechanical components. This is an area of vital importance for the reliability practitioner, who needs realistic and easy-to-use mathematical models for different failure modes. The fourth paper discusses mathematical models for repairable systems, especially the problem of testing whether a constant event intensity model is adequate or not. (author)
Muhammad Aslam Noor
We consider a new class of equilibrium problems, known as hemiequilibrium problems. Using the auxiliary principle technique, we suggest and analyze a class of iterative algorithms for solving hemiequilibrium problems, the convergence of which requires either pseudomonotonicity or partially relaxed strong monotonicity. As a special case, we obtain a new method for hemivariational inequalities. Since hemiequilibrium problems include hemivariational inequalities and equilibrium problems as special cases, the results proved in this paper still hold for these problems.
Nualnong Wongtongkam; Paul Russell Ward; Andrew Day; Anthony Harold Winefield
In Thailand physical violence among male adolescents is considered a significant public health issue, although there has been little published research into the aetiology and functions of violence in Thai youth. Research in this area has been hampered by a lack of psychometrically sound tools that have been validated to assess problem behaviours in Asian youth. The purpose of this paper is to provide validity and reliability data on an instrument to measure violence in Thai youth. In this study ...
Enevoldsen, I.; Sørensen, John Dalsgaard
Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...
Iachina, Maria; Bilenberg, Niels
... data from a national database in which HoNOSCA is scored before and after therapy in order to show the treatment effect. We constructed a modified score to correct for the potential bias due to RTM, and used Generalized Linear Models analysis to adjust for the ceiling and floor effect. Our study showed...
Nayak, Arun Kumar, E-mail: email@example.com [Reactor Engineering Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India); Chandrakar, Amit [Homi Bhabha National Institute, Mumbai (India); Vinod, Gopika [Reactor Safety Division, Reactor Design and Development Group, Bhabha Atomic Research Centre, Mumbai (India)
Reliability assessment of passive safety systems is one of the important issues, since the safety of advanced nuclear reactors relies on several passive features. In this context, a few methodologies, such as reliability evaluation of passive safety system (REPAS), reliability methods for passive safety functions (RMPS), and analysis of passive systems reliability (APSRA), have been developed in the past. These methodologies have been used to assess the reliability of various passive safety systems. While these methodologies have certain features in common, they differ in their treatment of certain issues, for example, model uncertainties and the deviation of geometric and process parameters from their nominal values. This paper presents the state of the art on passive system reliability assessment methodologies, the accomplishments, and remaining issues. In this review, three critical issues pertaining to passive systems performance and reliability have been identified. The first issue is the applicability of best estimate codes and model uncertainty. Best-estimate-code-based phenomenological simulations of natural convection passive systems can carry significant uncertainties, and these uncertainties must be incorporated in an appropriate manner in the performance and reliability analysis of such systems. The second issue is the treatment of dynamic failure characteristics of components of passive systems. The REPAS, RMPS, and APSRA methodologies do not consider dynamic failures of components or processes, which may have a strong influence on the failure of passive systems. The influence of dynamic failure characteristics of components on system failure probability is presented with the help of a dynamic reliability methodology based on Monte Carlo simulation. The analysis of a benchmark hold-up tank problem shows the error in failure probability estimation when the dynamism of components is not considered. It is thus suggested that dynamic reliability methodologies must be ...
Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet
... in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint-approximation-based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from ... Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially improve computational cost without compromising accuracy. This contribution allows analyses of large datasets ...
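A minimal importance-sampling example in the same spirit, with a sum of i.i.d. normal scores standing in for the factor graph score (assumptions mine, not the paper's implementation): exponential tilting concentrates samples near the tail event, and the likelihood ratio corrects the estimate.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, s0, draws = 20, 18.0, 100_000           # P(S > s0) with S ~ N(0, n): tiny
theta = s0 / n                             # tilt so the shifted sum has mean s0

# Naive Monte Carlo: very noisy at this sample size
naive = (rng.standard_normal((draws, n)).sum(axis=1) > s0).mean()

# Importance sampling under the tilted law N(theta, 1) per coordinate
x = rng.standard_normal((draws, n)) + theta
s = x.sum(axis=1)
# likelihood ratio: prod_i phi(x_i)/phi(x_i - theta) = exp(-theta*S + n*theta^2/2)
w = np.exp(-theta * s + n * theta**2 / 2.0)
is_est = np.mean(w * (s > s0))

print(f"naive: {naive:.2e}  IS: {is_est:.2e}  exact: {norm.sf(s0 / np.sqrt(n)):.2e}")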
Carlson, D.D.; Murphy, J.A.
The Interim Reliability Evaluation Program (IREP), sponsored by the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission, is currently applying probabilistic risk analysis techniques to two PWR and two BWR type power plants. Emphasis was placed on the systems analysis portion of the risk assessment, as opposed to accident phenomenology or consequence analysis, since the identification of risk significant plant features was of primary interest. Traditional event tree/fault tree modeling was used for the analysis. However, the study involved a more thorough investigation of transient initiators and of support system faults than studies in the past and substantially improved techniques were used to quantify accident sequence frequencies. This study also attempted to quantify the potential for operator recovery actions in the course of each significant accident
Stieglitz, Lennart Henning
Neuronavigation plays a central role in modern neurosurgery. It allows visualizing instruments and three-dimensional image data intraoperatively and supports spatial orientation. Thus it allows to reduce surgical risks and speed up complex surgical procedures. The growing availability and importance of neuronavigation makes clear how relevant it is to know about its reliability and accuracy. Different factors may influence the accuracy during the surgery unnoticed, misleading the surgeon. Besides the best possible optimization of the systems themselves, a good knowledge about its weaknesses is mandatory for every neurosurgeon.
Fosgerau, Mogens; Karlström, Anders
We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean ...
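A compact way to state the headline result (notation assumed here, a sketch rather than the authors' exact formula): writing trip duration as $T = \mu + \sigma X$ with $X$ following a fixed standardized distribution $F$,

\[
  \max_{d}\; \mathbb{E}\bigl[u(d,\,\mu + \sigma X)\bigr] \;=\; c_0 + c_1\,\mu + c_2\,\sigma ,
\]

where $d$ is the departure time, $c_1$ captures the marginal disutility of mean trip duration, and $c_2$ depends on the scheduling preferences and on $F$ only through a fixed integral of its quantile function $F^{-1}$. The ratio $c_2 / c_1$ is then the value of reliability: the rate at which travellers trade standard deviation against mean.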
Two major limitations occur in present structural design code developments utilizing reliability theory. The notional system reliabilities may differ significantly from calibrated component reliabilities. Secondly, actual failures are often due to gross errors not reflected in most present code formats. A review is presented of system reliability methods and further new concepts are developed. The incremental load approach for identifying and expressing collapse modes is expanded by employing a strategy to identify and enumerate the significant structural collapse modes. It further isolates the importance of critical components in the system performance. Ductile and brittle component behavior and strength correlation are reflected in the system model and illustrated in several examples. Modal combinations for the system reliability are also reviewed. From these developments a system factor can be appended to component safety checking equations. Values may be derived from system behavior by substituting in a damage model which accounts for the response range from component failure to collapse. Other strategies are discussed which emphasize quality assurance during design and in-service inspection for components whose behavior is critical to the system reliability. (Auth.)
Caruso, J C
The unreliability of difference scores is a well documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adult and Adolescent Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and underpowered significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.
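For context, the classical formula behind the unreliability of difference scores (standard psychometrics, not the RCA computation itself): with equal scale variances, the reliability of a difference falls as the correlation between the scales rises.

def diff_score_reliability(r_xx, r_yy, r_xy):
    # Classical reliability of a difference score, equal variances assumed:
    #   r_D = (mean(r_xx, r_yy) - r_xy) / (1 - r_xy)
    return ((r_xx + r_yy) / 2.0 - r_xy) / (1.0 - r_xy)

for r_xy in (0.3, 0.6, 0.8):
    print(f"r_xy = {r_xy}: r_D = {diff_score_reliability(0.95, 0.95, r_xy):.2f}")
# r_D falls from about 0.93 to 0.75 as the two scales become more correlated,
# which is why highly correlated scales yield unreliable differences.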
Johnson, William; Cameron, Noël; Dickson, Peter; Emsley, Stuart; Raynor, Pauline; Seymour, Claire; Wright, John
Reliable data on child growth is a prerequisite for monitoring and improving child health. Despite the extensive resources invested in recording anthropometry, there has been little research into the reliability of these data. If these measurements are unreliable growth may be misreported, and health problems may go undetected. To assess the reliability of routine infant growth data, following anthropometric training of health workers responsible for collecting these data, in Bradford, UK. To determine whether being observed by an external administrator influenced reliability. A test-retest design was used. All health workers (n=192) responsible for growth monitoring in Bradford were included in the study, of which 36.5% (n=70) had complete data. Following training in basic anthropometry all health workers were asked to complete a test-retest study, using infants aged 0-2 years. Health workers took two recordings of weight, length, head circumference, and abdominal circumferences on five infants. A peer health worker recorded a third set of measurements on each infant. Twenty-two individuals were selected to be observed by an external administrator during data collection. Technical errors of measurement (TEMs) were produced to assess intra-observer and inter-observer reliability. Differences between groups were tested to determine whether external observation influences reliability. None of the TEMs were excessively large, and coefficients of reliability ranged from 0.96 to 1.00. All intra-observer and inter-observer TEMs for the observed group were larger than those for the non-observed group. For example, the observed group's intra-observer TEMs for weight, length, abdominal circumference, and head circumference (46.18 g, 0.60 cm, 0.65 cm, 0.47 cm) were larger than the non-observed group's TEMs (9.14 g, 0.35 cm, 0.34 cm, 0.19 cm). TEMs for weight, abdominal circumference, and head circumference were significantly larger for the observed group, compared to the non-observed group.
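A short sketch of the reliability statistics used here, with the standard formulas for the technical error of measurement and the coefficient of reliability (the synthetic numbers below are invented, not the Bradford data):

import numpy as np

def tem(m1, m2):
    # Technical error of measurement for duplicate measurements:
    #   TEM = sqrt(sum(d^2) / (2 n)), d = difference between the two readings
    d = np.asarray(m1) - np.asarray(m2)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

rng = np.random.default_rng(2)
true_wt = rng.normal(5000, 600, 50)          # hypothetical infant weights, grams
m1 = true_wt + rng.normal(0, 15, 50)         # first measurement
m2 = true_wt + rng.normal(0, 15, 50)         # repeat measurement
t = tem(m1, m2)
# coefficient of reliability: R = 1 - TEM^2 / SD^2 of the measurements
R = 1 - t**2 / np.var(np.concatenate([m1, m2]), ddof=1)
print(f"TEM = {t:.1f} g, coefficient of reliability R = {R:.3f}")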
This paper presents a method to study human reliability in decision situations related to nuclear power plant disturbances. Decisions often play a significant role in the handling of emergency situations. The method may be applied to probabilistic safety assessments (PSAs) in cases where decision making is an important dimension of an accident sequence. Such situations are frequent e.g. in accident management. In this paper, a modelling approach for decision reliability studies is first proposed. Then, a case study with two decision situations with relatively different characteristics is presented. Qualitative and quantitative findings of the study are discussed. In very simple decision cases with time pressure, time reliability correlation proved to be a feasible reliability modelling method. In all other decision situations, more advanced probabilistic decision models have to be used. Finally, decision probability assessment using simulator run results and expert judgement is presented
Paula Regina Ventura Amorim Gonçalez
Introduction: Considering the guidelines of ISO 16363:2012 (Space data and information transfer systems -- Audit and certification of trustworthy digital repositories) and the text of CONARQ Resolution 39 for certification of a Reliable Digital Archival Repository (RDC-Arq), this study verifies which technical recommendations should be used as the basis for a digital archival repository to be considered reliable. Objective: Identify requirements for the creation of Reliable Digital Archival Repositories, with emphasis on access to information, from ISO 16363:2012 and CONARQ Resolution 39. Methodology: The study consisted of an exploratory, descriptive, and documentary theoretical investigation, since it is based on ISO 16363:2012 and CONARQ Resolution 39. From the perspective of the problem approach, the study is qualitative and quantitative, since the data were collected, tabulated, and analyzed through the interpretation of their contents. Results: A set of checklist recommendations for reliability measurement and/or certification of RDC-Arq is presented, focused on the identification of requirements with emphasis on access to information. Conclusions: The right to information, as well as access to reliable information, is a premise for Digital Archival Repositories, so the set of recommendations is directed to archivists who work in Digital Repositories and wish to verify the requirements necessary to evaluate the reliability of the Digital Repository, or to guide the information professional in collecting requirements for repository reliability certification.
Khaykovich, I.M.; Savosin, S.I.
The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)
This paper investigates structural design optimization to cover both reliability and robustness under uncertainty in design variables. The main objective is to improve the efficiency of the optimization process. To address this problem, a hybrid reliability-based robust design optimization (RRDO) method is proposed. Prior to the design optimization, Sobol sensitivity analysis is used to select key design variables and to provide the response variance as well, resulting in significantly reduced computational complexity. The single-loop algorithm is employed to guarantee the structural reliability, allowing a fast optimization process. In the case of robust design, the weighting factor balances the response performance and variance with respect to the uncertainty in design variables. The main contribution of this paper is that the proposed method applies the RRDO strategy with the use of global approximation and Sobol sensitivity analysis, leading to reduced computational cost. A structural example is given to illustrate the performance of the proposed method.
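A minimal Monte Carlo estimator of first-order Sobol indices, in the spirit of the sensitivity-screening step (the toy response function and uniform inputs are my assumptions; the pick-and-freeze estimator below is the standard Saltelli-style one, not the paper's implementation):

import numpy as np

def model(x):
    # toy structural response; columns of x are the design variables
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(8)
N, d = 50_000, 3
A, B = rng.random((N, d)), rng.random((N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                 # vary only input i between yA and model(AB)
    S_i = np.mean(yB * (model(AB) - yA)) / var_y
    print(f"first-order Sobol index S_{i + 1} = {S_i:.3f}")

Variables with small indices contribute little response variance and are candidates to drop before the optimization loop.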
Sanders, James L; Williams, Robert J
Most tests of video game addiction have weak construct validity and limited ability to correctly identify people in denial. The purpose of the present research was to investigate the reliability and validity of a new test of video game addiction (Behavioral Addiction Measure-Video Gaming [BAM-VG]) that was developed in part to address these deficiencies. Regular adult video gamers (n = 506) were recruited from a Canadian online panel and completed a survey containing three measures of excessive video gaming (BAM-VG; DSM-5 criteria for Internet Gaming Disorder [IGD]; and the IGD-20), as well as questions concerning extensiveness of video game involvement and self-report of problems associated with video gaming. One month later, they were reassessed for the purposes of establishing test-retest reliability. The BAM-VG demonstrated good internal consistency as well as 1 month test-retest reliability. Criterion-related validity was demonstrated by significant correlations with the following: time spent playing, self-identification of video game problems, and scores on other instruments designed to assess video game addiction (DSM-5 IGD, IGD-20). Consistent with the theory, principal component analysis identified two components underlying the BAM-VG that roughly correspond with impaired control and significant negative consequences deriving from this impaired control. Together with its excellent construct validity and other technical features, the BAM-VG represents a reliable and valid test of video game addiction.
Johnson, Stephen B.
The technical and organizational problems posed by the Space Exploration Initiative (SEI) are discussed, and some possible solutions are examined. It is pointed out that SEI poses a whole new set of challenging problems in the design of reliable systems. These missions and their corresponding systems are far more complex than current systems. The initiative requires a set of vehicles and systems which must have very high levels of autonomy, reliability, and operability for long periods of time. It is emphasized that to achieve these goals in the face of great complexity, new technologies and organizational techniques will be necessary. It is noted that the key to a good design is good people. Not only must good people be found, but they must be placed in positions appropriate to their skills. It is argued that the atomistic and autocratic paradigm of vertical organizations must be replaced with more team-oriented and democratic structures.
Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir
The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.
Allan, R.N.; Whitehead, A.M.
The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)
Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)
This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc., and MSE. Research focused on two areas: real-time power-system load control methodologies; and, power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research includes: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and, the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”
Wright, R I
Microprocessor-based technology has had an impact in nearly every area of industrial electronics and many applications have important safety implications. Microprocessors are being used for the monitoring and control of hazardous processes in the chemical, oil and power generation industries, for the control and instrumentation of aircraft and other transport systems and for the control of industrial machinery. Even in the field of nuclear reactor protection, where designers are particularly conservative, microprocessors are used to implement certain safety functions and may play increasingly important roles in protection systems in the future. Where microprocessors are simply replacing conventional hard-wired control and instrumentation systems no new hazards are created by their use. In the field of robotics, however, the microprocessor has opened up a totally new technology and with it has created possible new and as yet unknown hazards. The paper discusses some of the design and manufacturing techniques which may be used to enhance the reliability of microprocessor based systems and examines the available reliability data on LSI/VLSI microcircuits. 12 references.
Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle
A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered, and the accumulated plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies.
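A crude Monte Carlo version of the first-passage computation (all parameters invented): because the accumulated plastic deformation only grows, the process crosses the ductility limit within the design life exactly when the compound (filtered) Poisson total at the end of life exceeds the limit.

import numpy as np

rng = np.random.default_rng(3)
rate, life, limit, sims = 0.5, 50.0, 10.0, 100_000   # events/yr, yrs, limit
failures = 0
for _ in range(sims):
    n = rng.poisson(rate * life)              # number of significant loadings
    increments = rng.exponential(0.4, n)      # plastic increment per loading
    if increments.sum() > limit:              # monotone accumulation: crossing
        failures += 1                         # happens iff the total exceeds
print(f"first-passage probability over {life} yr: {failures / sims:.4f}")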
Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.
Production and manufacture play an important role in modern society. Industrial production is nowadays characterized by increased and complex communications between its parts. The problem of preventing accidents at a large industrial enterprise is therefore especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance. Potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, can even cause a loss of human lives. That is why the reliability of industrial enterprise functioning is immensely important. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions. The main characteristic here is the stability of its operation. This paper develops a reliability indicator combining the set theory methodology and a state space method. Both are widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.
Selcuk, A. Sevtap; Yuecemen, M. Semih
Lifelines, such as pipelines, transportation, communication and power transmission systems, are networks which extend spatially over large geographical regions. The quantification of the reliability (survival probability) of a lifeline under seismic threat requires attention, as the proper functioning of these systems during or after a destructive earthquake is vital. In this study, a lifeline is idealized as an equivalent network with the capacity of its elements being random and spatially correlated, and a comprehensive probabilistic model for the assessment of the reliability of lifelines under earthquake loads is developed. The seismic hazard that the network is exposed to is described by a probability distribution derived by using past earthquake occurrence data. The seismic hazard analysis is based on the 'classical' seismic hazard analysis model with some modifications. An efficient algorithm developed by Yoo and Deo (Yoo YB, Deo N. A comparison of algorithms for terminal pair reliability. IEEE Transactions on Reliability 1988; 37: 210-215) is utilized for the evaluation of the network reliability. This algorithm eliminates the CPU time and memory capacity problems for large networks. A comprehensive computer program, called LIFEPACK, is coded in Fortran to carry out the numerical computations. Two detailed case studies are presented to show the implementation of the proposed model
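For illustration, a crude Monte Carlo stand-in for the terminal-pair reliability evaluation (the study uses the exact Yoo-Deo algorithm; the network and survival probabilities below are invented): each link survives the earthquake independently with its own probability, and the network survives if the source still connects to the terminal.

import random
from collections import defaultdict

links = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "t"): 0.85,
         ("b", "t"): 0.9, ("a", "b"): 0.95}   # link: survival probability

def connected(up_links, src="s", dst="t"):
    # depth-first search over the surviving links only
    graph = defaultdict(list)
    for u, v in up_links:
        graph[u].append(v)
        graph[v].append(u)
    seen, stack = {src}, [src]
    while stack:
        for w in graph[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return dst in seen

random.seed(4)
sims = 100_000
hits = sum(
    connected([e for e, p in links.items() if random.random() < p])
    for _ in range(sims))
print(f"estimated s-t reliability: {hits / sims:.4f}")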
With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI) and hot carrier injection (HCI). This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, therefore being able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table), which consists of both RAM memory bits and an associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT's reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of the signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.
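The selection logic implied by this conclusion can be sketched in a few lines (the scores and budget below are invented; this omits the BTI reliability model and the analytical optimality proof):

luts = {                      # name: (transition_density, observability)
    "lut0": (0.42, 0.90), "lut1": (0.10, 0.30), "lut2": (0.35, 0.75),
    "lut3": (0.05, 0.95), "lut4": (0.28, 0.50),
}
budget = 2                    # how many LUTs the power budget lets us boost

# modular criticality = signal transition density x logic observability
criticality = {k: d * o for k, (d, o) in luts.items()}
boosted = sorted(criticality, key=criticality.get, reverse=True)[:budget]
print("elevated-VDD LUTs:", boosted)   # fortify the most critical modules first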
Kozine, Igor; Krymsky, Victor
The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density ...
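The simplest special case shows why no upper bound on time to failure is needed (my notation; the paper handles richer constraint sets via constrained optimization): a constant failure rate known only up to an interval brackets the survival function at every time,

\[
  \lambda \in [\underline{\lambda},\, \overline{\lambda}]
  \quad\Longrightarrow\quad
  R(t) = e^{-\lambda t} \in \bigl[\, e^{-\overline{\lambda}\, t},\; e^{-\underline{\lambda}\, t} \,\bigr]
  \quad \text{for all } t \ge 0,
\]

and any additional reliability measures, precise or imprecise, enter as further constraints that tighten this bracket.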
With the development of control and information technology at NPPs, software reliability is important because software failure is usually considered as one form of common cause failure in Digital I and C Systems (DCS). The reliability analysis of DCS, particularly the qualitative and quantitative evaluation of nuclear safety-critical software reliability, remains a great challenge. To solve this problem, this paper builds both a comprehensive evaluation model and stage evaluation models, and provides prediction and sensitivity analysis for the models. This can provide a basis for evaluating the reliability and safety of DCS. (author)
The paper presents a critical review of the state-of-the-art in probabilistic assessment of pressure vessel and piping reliability. First the differences in assessing the reliability directly from historical failure data and indirectly by a probabilistic analysis of the failure phenomenon are discussed and the advantages and disadvantages are pointed out. The rest of the paper deals with the latter approach of reliability assessment. Methods of probabilistic reliability assessment are described and major projects where these methods are applied for pressure vessel and piping problems are discussed. An extensive list of references is provided at the end of the paper
Cross, P.M.; Taplin, R.C.
The reliability problems associated with modernizing control systems in nuclear power plants, particularly by using new technology microcircuits, are discussed and twelve problem areas identified. These are: new technology introduction; variability in manufacture; derating necessities; distributed systems; use of redundancy; electrostatic discharge damage; electromagnetic interference; nuclear radiation; thermal effects; contamination, including humidity; mechanical effects, including vibration; and testing. Recommendations for the AECB are given in each area. Guidelines are given for the design, procurement, installation, operation and maintenance stages of use. Recommendations for further work are given
Whether global warming, terrestrial carbon sinks, ecosystem functioning, genetically modified organisms, cloning, vaccination or chemicals in the environment, science is increasingly the battlefield on which political advocates, not least lawyers and commercial interests, manipulate 'facts' to their preferred direction, which fosters the politicization of science. Debate putatively over science increasingly relies on tactics such as ad hominem attacks and criticism of process (for example, peer review or sources of funding), through paid advertisements, press releases and other publicity campaigns. As political battles are waged through 'science', many scientists are willing to adopt tactics of demagoguery and character assassination as well as, or even instead of, reasoned argument, as in aspects of debate over genetically modified crops or global warming. Science is becoming yet another playing field for power politics, complete with the trappings of media spin and a win-at-all-costs attitude. Sadly, much of what science can offer policymakers, and hence society, is lost. This talk will use cases from the atmospheric sciences as points of departure to explore the politicization of science from several perspectives and address questions such as: Is it a problem? For whom and what outcomes? What are the alternatives to business-as-usual?
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
The overarching goal of this research project is to enable state DOTs to document and monitor the reliability performance of their highway networks. To this end, a computer tool, TRIC, was developed to produce travel reliability inventories from ...
The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble (unless one is ready to resort to telekinetic dualism). Fortunately, the "easy" problems of cognitive science (such as the how and why of categorization and language) are not insoluble. Five books (by Damasio, Edelman/Tononi...
Zhu, Jiandao; Collette, Matthew
The material and modeling parameters that drive structural reliability analysis for marine structures are subject to a significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem. - Highlights: • A dynamic discretization method is developed for low-probability events in DBNs. • The method is compared to existing approaches on two crack growth problems. • The method is shown to improve on existing methods for low-probability events
Acharya, G.D.; Trivedi, S.A.R.; Pai, K.B.
The high performance of the time-of-flight diffraction (TOFD) technique with regard to the detection of weld defects such as cracks, slag and lack of fusion has led to a rapidly increasing acceptance of the technique as a pre-service inspection tool. Since the early 1990s TOFD has been applied to several projects, where it replaced the commonly used radiographic testing. The use of TOFD led to major time savings during new-build and replacement projects. At the same time the TOFD technique was used as a baseline inspection, which enables future monitoring of critical welds, but also provides documented evidence for life-time assessment. The TOFD technique has the ability to detect and simultaneously size flaws of nearly any orientation within the weld and heat-affected zone. TOFD is recognized as a reliable, proven technique for detection and sizing of defects and has proven to be a time saver, resulting in shorter shutdown periods and construction project times. Thus even in cases where the inspection price of TOFD per weld is higher, in the end it will result in significantly lower overall costs and improved quality. This paper deals with reliability, economy, acceptance criteria and field experience. It also covers a comparative study between the radiographic technique and TOFD. (Author)
Schwan, C.A.; Morgan, T.A.
Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab
Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.
Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, amount of damage, maximum runup height, had a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and the 2004 Banda Aceh, tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/
Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.
Wolfe, W.A.; Nieuwhof, G.W.E.
AECL's reliability and maintainability program for nuclear generating stations is described. How the various resources of the company are organized to design and construct stations that operate reliably and safely is shown. Reliability and maintainability includes not only special mathematically oriented techniques, but also the technical skills and organizational abilities of the company. (author)
Aiming to resolve the problems posed by the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using the modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms where truncated random variables, fuzzy random variables, and interval variables coexist, and also shows the good convergence of the iterative algorithm proposed in this article.
In order to introduce the basic concepts within the field of reliability-based structural optimization problems, this chapter is devoted to a brief outline of the basic theories. Therefore, this chapter is of a more formal nature and is used as a basis for the remaining parts of the thesis. In section 2.2 a general non-linear optimization problem and corresponding terminology are presented, whereupon optimality conditions and the standard form of an iterative optimization algorithm are outlined. Subsequently, the special properties and characteristics concerning structural optimization problems are treated in section 2.3. With respect to the reliability evaluation, the basic theory behind a reliability analysis and estimation of the probability of failure by the First-Order Reliability Method (FORM) and the iterative Rackwitz-Fiessler (RF) algorithm are considered in section 2.5, in which ...
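A compact sketch of the Rackwitz-Fiessler (HL-RF) iteration referenced here, on a toy limit state in standard normal space (the limit-state function is mine, not from the thesis): the iteration converges to the design point, whose distance from the origin is the reliability index beta, with Pf approximated by Phi(-beta).

import numpy as np
from scipy.stats import norm

def g(u):
    # toy limit-state function in standard normal space (failure when g <= 0)
    return 3.0 - u[0] - 0.5 * u[1] ** 2

def grad_g(u):
    return np.array([-1.0, -u[1]])

u = np.array([0.1, 0.1])                       # starting point
for _ in range(100):
    gv, gr = g(u), grad_g(u)
    u_new = gr * (gr @ u - gv) / (gr @ gr)     # HL-RF update step
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                       # FORM reliability index
print(f"beta = {beta:.4f}, Pf approx {norm.cdf(-beta):.3e}")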
Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Hutton, P.H.; Reid, L.D.; Simonen, F.A.; Spanner, J.C.; Vo, T.V.
This paper reports on progress for three programs: (1) evaluation and improvement in nondestructive examination reliability for inservice inspection of light water reactors (LWR) (NDE Reliability Program), (2) field validation acceptance, and training for advanced NDE technology, and (3) evaluation of computer-based NDE techniques and regional support of inspection activities. The NDE Reliability Program objectives are to quantify the reliability of inservice inspection techniques for LWR primary system components through independent research and establish means for obtaining improvements in the reliability of inservice inspections. The areas of significant progress will be described concerning ASME Code activities, re-analysis of the PISC-II data, the equipment interaction matrix study, new inspection criteria, and PISC-III. The objectives of the second program are to develop field procedures for the AE and SAFT-UT techniques, perform field validation testing of these techniques, provide training in the techniques for NRC headquarters and regional staff, and work with the ASME Code for the use of these advanced technologies. The final program's objective is to evaluate the reliability and accuracy of interpretation of results from computer-based ultrasonic inservice inspection systems, and to develop guidelines for NRC staff to monitor and evaluate the effectiveness of inservice inspections conducted on nuclear power reactors. This program started in the last quarter of FY89, and the extent of the program was to prepare a work plan for presentation to and approval from a technical advisory group of NRC staff
This article presents general problems such as natural disasters, consequences of global climate change, public health, the danger of criminal actions, and the availability of information about environmental problems
Human Reliability Assessment (HRA) is conducted on the unspoken premise that 'human error' is a meaningful concept and that it can be associated with individual actions. The basis for this assumption is found in the origin of HRA, as a necessary extension of PSA to account for the impact of failures emanating from human actions. Although it was natural to model HRA on PSA, a large number of studies have shown that the premises are wrong, specifically that human and technological functions cannot be decomposed in the same manner. The general experience from accident studies also indicates that action failures are a function of the context, and that it is the variability of the context, rather than the 'human error probability', that is the much-sought signal. Accepting this will have significant consequences for the way in which HRA, and ultimately also PSA, should be pursued
The use of Human Reliability Analysis (HRA) to identify and resolve human factors issues has significantly increased over the past two years. Today, utilities, research institutions, consulting firms, and the regulatory agency have found a common application of HRA tools and Probabilistic Risk Assessment (PRA). The "1985 IEEE Third Conference on Human Factors and Power Plants" devoted three sessions to the discussion of these applications and a review of the insights so gained. This paper summarizes the three sessions and presents those common conclusions that were discussed during the meeting. The paper concludes that session participants supported the use of an adequately documented "living PRA" to address human factors issues in design and procedural changes, regulatory compliance, and training, and that the techniques can produce cost-effective qualitative results that are complementary to more classical human factors methods
Apostolakis, G.E.; Cave, L.; Epler, E.P.; Ilberg, D.; Kastenberg, W.E.
The reliability program which was established for the Clinch River Fast Breeder Reactor Project is reviewed. The review reveals several problems and uncertainties which need to be appropriately addressed by the reliability program. The main problems are stated to be the choice of the reliability criterion, the prospects of attaining high reliabilities of systems and critical structures, the role of external phenomena as well as the verification by tests and the reliability assurance program
Joachim I. Krueger
The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST, and for two of these we compare ST's performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
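The first claim is easy to reproduce in miniature (all settings invented): simulate studies in which the hypothesis is true half the time, compute two-sided p-values from a one-sample z-test, and tabulate how often the hypothesis is true within each p-value band.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
studies, n, effect = 200_000, 25, 0.5
h1 = rng.random(studies) < 0.5                    # which studies have a real effect
means = rng.normal(np.where(h1, effect, 0.0), 1 / np.sqrt(n))
p = 2 * norm.sf(np.abs(means) * np.sqrt(n))       # two-sided p-values

for lo, hi in [(0.0, 0.01), (0.01, 0.05), (0.05, 0.2), (0.2, 1.0)]:
    band = (p >= lo) & (p < hi)
    print(f"p in [{lo}, {hi}): P(H1 | p) = {h1[band].mean():.2f}")
# smaller p-value bands carry a visibly higher empirical probability that H1 holds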
Nirula, Ram; Talmor, Daniel; Brasel, Karen
Identification of motor vehicle crash (MVC) characteristics associated with thoracoabdominal injury would advance the development of automatic crash notification systems (ACNS) by improving triage and response times. Our objective was to determine the relationships between MVC characteristics and thoracoabdominal trauma to develop a torso injury probability model. Drivers involved in crashes from 1993 to 2001 within the National Automotive Sampling System were reviewed. Relationships between torso injury and MVC characteristics were assessed using multivariate logistic regression. Receiver operating characteristic curves were used to compare the model to current ACNS models. There were a total of 56,466 drivers. Age, ejection, braking, avoidance, velocity, restraints, passenger-side impact, rollover, and vehicle weight and type were associated with injury (p < 0.05). The area under the receiver operating characteristic curve (83.9) was significantly greater than current ACNS models. We have developed a thoracoabdominal injury probability model that may improve patient triage when used with ACNS.
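A sketch of the modelling pipeline on synthetic data (the feature effects and data-generating numbers are invented; this is not the NASS data or the authors' fitted ACNS model): logistic regression of torso injury on crash characteristics, scored by the area under the ROC curve on held-out crashes.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 20_000
X = np.column_stack([
    rng.normal(45, 18, n),          # delta-velocity, km/h
    rng.integers(0, 2, n),          # ejection (0/1)
    rng.integers(0, 2, n),          # restrained (0/1)
    rng.integers(0, 2, n),          # rollover (0/1)
])
# synthetic ground truth: injury probability follows a logistic model
logit = -6 + 0.08 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2] + 1.0 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC on held-out crashes: {auc:.3f}")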
This paper briefly describes the wellhead prices of natural gas compared to crude oil over the past 70 years. Although natural gas prices have never reached price parity with crude oil, the relative value of a gas BTU has been increasing. It is one of the reasons that the total amount of money coming from natural gas wells is becoming more significant. From 1920 to 1955 the revenue at the wellhead for natural gas was only about 10% of the money received by producers. Most of the money needed for exploration, development, and production came from crude oil. At present, however, over 40% of the money from the upstream portion of the petroleum industry is from natural gas. As a result, in a few short years natural gas may account for 50% of the revenues generated from wellhead production facilities
Mathematical foundations of risk analysis are addressed. The importance of having the same probability space in order to compare different experiments is pointed out. Then the following topics are discussed: consequences as random variables with infinite expectations; the phenomenon of rare events; series-parallel systems and different kinds of randomness that could be imposed on such systems; and the problem of consensus of estimates of expert opinion
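A textbook illustration of the infinite-expectation phenomenon (the example is mine, not taken from the paper): a consequence variable with a Pareto tail,

\[
  f(x) = \alpha\, x^{-(\alpha + 1)}, \quad x \ge 1,
  \qquad
  \mathbb{E}[X] \;=\; \int_1^{\infty} x\, f(x)\, dx \;=\;
  \begin{cases}
    \alpha / (\alpha - 1), & \alpha > 1, \\
    \infty, & \alpha \le 1,
  \end{cases}
\]

so for $\alpha \le 1$ sample means of observed consequences never stabilize, which is precisely why expected values alone cannot rank such risks.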
Mazzuchi, Thomas; Singpurwalla, Nozer
In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...
I have developed two highly efficient codes to automate analyses of emission line nebulae. The tools place particular emphasis on the propagation of uncertainties. The first tool, ALFA, uses a genetic algorithm to rapidly optimise the parameters of gaussian fits to line profiles. It can fit emission line spectra of arbitrary resolution, wavelength range and depth, with no user input at all. It is well suited to highly multiplexed spectroscopy such as that now being carried out with instruments such as MUSE at the VLT. The second tool, NEAT, carries out a full analysis of emission line fluxes, robustly propagating uncertainties using a Monte Carlo technique. Using these tools, I have found that considerable biases can be introduced into abundance determinations if the uncertainty distribution of emission lines is not well characterised. For weak lines, normally distributed uncertainties are generally assumed, though it is incorrect to do so, and significant biases can result. I discuss observational evidence of these biases. The two new codes contain routines to correctly characterise the probability distributions, giving more reliable results in analyses of emission line nebulae.
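The Monte Carlo propagation idea in miniature (the fluxes and uncertainties below are invented; NEAT additionally models the asymmetric uncertainty distributions of weak lines): sample each measured flux from its uncertainty distribution, push the samples through the derived quantity, and read off percentiles.

import numpy as np

rng = np.random.default_rng(7)
draws = 100_000
f5007 = rng.normal(3.2, 0.1, draws)      # strong line flux with its uncertainty
f4363 = rng.normal(0.045, 0.008, draws)  # weak auroral line, larger relative error
ratio = f5007 / f4363                    # temperature-sensitive diagnostic ratio

lo, med, hi = np.percentile(ratio, [16, 50, 84])
print(f"line ratio = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})")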
Xia, J; Li, Chunbo
The severe and long-lasting symptoms of schizophrenia are often the cause of severe disability. Environmental stress, such as life events and the practical problems people face in their daily lives, can worsen the symptoms of schizophrenia. Deficits in problem solving skills in people with schizophrenia affect their independent and interpersonal functioning and impair their quality of life. As a result, therapies such as problem solving therapy have been developed to improve problem solving skills for people with schizophrenia. To review the effectiveness of problem solving therapy compared with other comparable therapies or routine care for those with schizophrenia. We searched the Cochrane Schizophrenia Group's Register (September 2006), which is based on regular searches of BIOSIS, CENTRAL, CINAHL, EMBASE, MEDLINE and PsycINFO. We inspected references of all identified studies for further trials. We included all clinical randomised trials comparing problem solving therapy with other comparable therapies or routine care. We extracted data independently. For homogenous dichotomous data we calculated random effects, relative risk (RR), 95% confidence intervals (CI) and, where appropriate, numbers needed to treat (NNT) on an intention-to-treat basis. For continuous data, we calculated weighted mean differences (WMD) using a random effects statistical model. We included only three small trials (n=52) that evaluated problem solving versus routine care, coping skills training or non-specific interaction. Inadequate reporting of data rendered many outcomes unusable. We were unable to undertake meta-analysis. Overall results were limited and inconclusive, with no significant differences between treatment groups for hospital admission, mental state, behaviour, social skills or leaving the study early. No data were presented for global state, quality of life or satisfaction. We found insufficient evidence to confirm or refute the benefits of problem solving therapy as an additional ...
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
The reference book covers four sections. Apart from the fundamental aspects of the reliability problem, of risk and safety and the relevant criteria with regard to reliability, the material presented explains reliability in terms of maintenance, logistics and availability, and presents procedures for reliability assessment and determination of factors influencing the reliability, together with suggestions for systems technical integration. The reliability assessment consists of diagnostic and prognostic analyses. The section on factors influencing reliability discusses aspects of organisational structures, programme planning and control, and critical activities. (DG)
Weaver, Robert R
Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an
Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio
The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
Supe, S.J.; Nagalaxmi, K.V.; Meenakshi, L.
In the practice of radiotherapy, various concepts like NSD, CRE, TDF, and BIR are being used to evaluate the biological effectiveness of the treatment schedules on the normal tissues. This has been accepted as the tolerance of the normal tissue is the limiting factor in the treatment of cancers. At present when various schedules are tried, attention is therefore paid to the biological damage of the normal tissues only and it is expected that the damage to the cancerous tissues would be extensive enough to control the cancer. An attempt is made in the present work to evaluate the concept of tumor significant dose (TSD), which will represent the damage to the cancerous tissue. Strandqvist, in the analysis of a large number of cases of squamous cell carcinoma, found that for the 5 fraction/week treatment the total dose required to bring about the same damage for the cancerous tissue is proportional to $T^{-0.22}$, where T is the overall time over which the dose is delivered. Using this finding, the TSD was defined as $D \times N^{-p} \times T^{-q}$, where D is the total dose, N the number of fractions, T the overall time, and p and q are the exponents to be suitably chosen. The values of p and q are adjusted such that $p + q \leq 0.24$, with p varying from 0.0 to 0.24 and q varying from 0.0 to 0.22. Cases of cancer of cervix uteri treated between 1978 and 1980 in the V. N. Cancer Centre, Kuppuswamy Naidu Memorial Hospital, Coimbatore, India were analyzed on the basis of these formulations. These data, coupled with the clinical experience, were used for choice of a formula for the TSD. Further, the dose schedules used in the British Institute of Radiology fractionation studies were also used to propose that the tumor significant dose is represented by $D \times N^{-0.18} \times T^{-0.06}$
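A short numerical illustration of the proposed formula follows; the dose schedule used here is hypothetical and serves only to show how the expression is evaluated.

```python
# Illustrative evaluation of the proposed tumor significant dose,
# TSD = D * N**(-0.18) * T**(-0.06). The schedule below is hypothetical.
def tumor_significant_dose(total_dose, n_fractions, overall_time_days,
                           p=0.18, q=0.06):
    """Return the TSD for a fractionated schedule."""
    return total_dose * n_fractions ** (-p) * overall_time_days ** (-q)

# Example: 6000 cGy in 30 fractions over 42 days (assumed schedule).
print(round(tumor_significant_dose(6000, 30, 42), 1))
```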
Ph D Student Roman Mihaela
The concept of "public accountability" is a challenge for political science as a new concept in this area, in full debate and development both in theory and practice. This paper is a theoretical approach displaying some definitions, relevant meanings and the significance of the concept in political science. The importance of this concept is that, although it was originally used as a tool to improve the effectiveness and efficiency of public governance, it has gradually become a purpose in itself. "Accountability" has become an image of good governance, first in the United States of America and then in the European Union. Nevertheless, the concept is vaguely defined and provides ambiguous images of good governance. This paper begins with the presentation of some general meanings of the concept as they emerge from specialized dictionaries and encyclopaedias and continues with the meanings developed in political science. The concept of "public accountability" is rooted in the economics and management literature, becoming increasingly relevant in today's political science, both in theory and discourse as well as in practice, in formulating and evaluating public policies. A first conclusion that emerges from the analysis of the evolution of this term is that it requires a conceptual clarification in political science. A clear definition will then enable an appropriate model of proving the system of public accountability in formulating and assessing public policies, in order to implement a system of assessment and monitoring thereof.
Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara
The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis
Cao, Dingzhou; Murat, Alper; Chinnam, Ratna Babu
This paper proposes a decomposition-based approach to exactly solve the multi-objective Redundancy Allocation Problem for series-parallel systems. Redundancy allocation problem is a form of reliability optimization and has been the subject of many prior studies. The majority of these earlier studies treat redundancy allocation problem as a single objective problem maximizing the system reliability or minimizing the cost given certain constraints. The few studies that treated redundancy allocation problem as a multi-objective optimization problem relied on meta-heuristic solution approaches. However, meta-heuristic approaches have significant limitations: they do not guarantee that Pareto points are optimal and, more importantly, they may not identify all the Pareto-optimal points. In this paper, we treat redundancy allocation problem as a multi-objective problem, as is typical in practice. We decompose the original problem into several multi-objective sub-problems, efficiently and exactly solve sub-problems, and then systematically combine the solutions. The decomposition-based approach can efficiently generate all the Pareto-optimal solutions for redundancy allocation problems. Experimental results demonstrate the effectiveness and efficiency of the proposed method over meta-heuristic methods on a numerical example taken from the literature.
Jo A. Ziegler
The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, ''DOE SNF DBE Offsite Dose Calculations'' (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in ''DOE SNF DBE Offsite Dose Calculations'' and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, ''Calculations'', and is subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (DOE 2000) as determined by the activity evaluation contained in ''Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010'' (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities''.
Normand, J.; Charon, M.
Concern for obtaining high-quality products which will function properly when required to do so is nothing new - it is one manifestation of a conscientious attitude to work. However, the complexity and cost of equipment and the consequences of even temporary immobilization are such that it has become necessary to make special arrangements for obtaining high-quality products and examining what one has obtained. Each unit within an enterprise must examine its own work or arrange for it to be examined; a unit whose specific task is quality assurance is responsible for overall checking, but does not relieve other units of their responsibility. Quality assurance is a form of mutual assistance within an enterprise, designed to remove the causes of faults as far as possible. It begins very early in a project and continues through the ordering stage, construction, start-up trials and operation. Quality and hence reliability are the direct result of what is done at all stages of a project. They depend on constant attention to detail, for even a minor piece of poor workmanship can, in the case of an essential item of equipment, give rise to serious operational difficulties
Sitzman, Thomas J; Allori, Alexander C; Matic, Damir B; Beals, Stephen P; Fisher, David M; Samson, Thomas D; Marcus, Jeffrey R; Tse, Raymond W
Objective: Oronasal fistula is an important complication of cleft palate repair that is frequently used to evaluate surgical quality, yet reliability of fistula classification has never been examined. The objective of this study was to determine the reliability of oronasal fistula classification both within individual surgeons and between multiple surgeons. Design: Using intraoral photographs of children with repaired cleft palate, surgeons rated the location of palatal fistulae using the Pittsburgh Fistula Classification System. Intrarater and interrater reliability scores were calculated for each region of the palate. Participants: Eight cleft surgeons rated photographs obtained from 29 children. Results: Within individual surgeons, reliability for each region of the Pittsburgh classification ranged from moderate to almost perfect (κ = .60-.96). By contrast, reliability between surgeons was lower, ranging from fair to substantial (κ = .23-.70). Between-surgeon reliability was lowest for the junction of the soft and hard palates (κ = .23). Within-surgeon and between-surgeon reliability were almost perfect for the more general classification of fistula in the secondary palate (κ = .95 and κ = .83, respectively). Conclusions: This is the first reliability study of fistula classification. We show that the Pittsburgh Fistula Classification System is reliable when used by an individual surgeon, but less reliable when used among multiple surgeons. Comparisons of fistula occurrence among surgeons may be subject to less bias if they use the more general classification of "presence or absence of fistula of the secondary palate" rather than the Pittsburgh Fistula Classification System.
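Below is a minimal sketch of the kind of agreement statistic reported above, Cohen's kappa between two raters for one palatal region. The ratings are fabricated placeholders, not data from the study.

```python
# Sketch of an interrater agreement calculation using Cohen's kappa.
# The ratings below are fabricated, not study data.
from sklearn.metrics import cohen_kappa_score

# Fistula present (1) / absent (0) in one palatal region, two raters.
rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"kappa = {kappa:.2f}")  # interpreted on the usual Landis-Koch scale
```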
Lee, Tae Hun
This book deals with Bayesian analysis, covering conditional probability and independence, the total probability law and Bayes' formula; simulation, including summaries and examples, random number generation and the generation of random variables; Markov chains, including the Markov property, a market share problem, steady-state probabilities, transition matrices and absorption in Markov chains; and queueing models such as M/M/1/3, PASTA and Little's law, independent trials, the Poisson process and the balance equations of a CTMC.
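As a small illustration of the Markov-chain material summarized above, the sketch below solves the balance equations for the steady-state probabilities of a three-state chain; the transition matrix is an assumed toy example (for instance, a market-share problem), not taken from the book.

```python
# Minimal example: solve pi P = pi with sum(pi) = 1 for an illustrative
# 3-state transition matrix (a toy market-share problem).
import numpy as np

P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

# Replace one balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)
print("steady-state probabilities:", np.round(pi, 4))
```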
Kostandyan, Erik; Sørensen, John Dalsgaard
Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) ... identification. Application of the proposed method can be found in many real world systems.
Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.
Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets.
A system where the components and the system itself are allowed to have a number of performance levels is called a multi-state system (MSS). A multi-state node network (MNN) is a generalization of the MSS that does not satisfy the flow conservation law. Evaluating the MNN reliability arises at the design and exploitation stage of many types of technical systems. Up to now, the known existing methods can only evaluate a special case of MNN reliability, the multi-state node acyclic network (MNAN), in which no cycles are allowed. However, no method exists for evaluating the general MNN reliability. The main purpose of this article is to show first that each MNN reliability problem can be solved using any traditional binary-state network (TBSN) reliability algorithm with a special code for the state probability. A simple heuristic SDP algorithm based on minimal cuts (MC) for estimating the MNN reliability is presented as an example to show how a TBSN reliability algorithm is revised to solve the MNN reliability problem. To the author's knowledge, this study is the first to discuss the relationships between MNN and TBSN and also the first to present methods to solve the exact and approximated MNN reliability. One example illustrates how the exact MNN reliability is obtained using the proposed algorithm
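For context, the sketch below shows the standard minimal-cut (Esary-Proschan style) approximation for a binary-state network, not the article's SDP algorithm: the system fails if every component in some minimal cut fails, and under independence the cut failure probabilities combine into an upper-bound style estimate of system unreliability. The component unreliabilities and cut sets are assumed for illustration.

```python
# Illustrative minimal-cut upper-bound estimate for a binary-state network:
#   Q_sys <= 1 - prod_over_cuts(1 - prod_over_components_in_cut(q_i)),
# assuming independent component failures. Data are hypothetical.
from math import prod

q = {"a": 0.02, "b": 0.05, "c": 0.01, "d": 0.03}     # component unreliabilities
minimal_cuts = [{"a", "b"}, {"c", "d"}, {"a", "d"}]   # assumed minimal cut sets

cut_fail_probs = [prod(q[c] for c in cut) for cut in minimal_cuts]
q_system_bound = 1 - prod(1 - p for p in cut_fail_probs)

print(f"approximate system unreliability <= {q_system_bound:.6f}")
```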
Lin, Zu-Liang; Huang, Yeu-Shiang; Fang, Chih-Chiang
In general, a non-periodic condition-based PM policy with different condition variables is often more effective than a periodic age-based policy for deteriorating complex repairable systems. In this study, system reliability is estimated and used as the condition variable, and three reliability-based PM models are then developed with consideration of different scenarios which can assist in evaluating the maintenance cost for each scenario. The proposed approach provides the optimal reliability thresholds and PM schedules in advance by which the system availability and quality can be ensured and the organizational resources can be well prepared and managed. The results of the sensitivity analysis indicate that PM activities performed at a high reliability threshold can not only significantly improve the system availability but also efficiently extend the system lifetime, although such a PM strategy is more costly than that for a low reliability threshold. The optimal reliability threshold increases along with the number of PM activities to prevent future breakdowns caused by severe deterioration, and thus substantially reduces repair costs. - Highlights: • The PM problems for repairable deteriorating systems are formulated. • The structural properties of the proposed PM models are investigated. • The corresponding algorithms to find the optimal PM strategies are provided. • Imperfect PM activities are allowed to reduce the occurrences of breakdowns. • Provide managers with insights about the critical factors in the planning stage
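The sketch below illustrates the basic reliability-threshold idea in simplified form, not the paper's model: assuming a Weibull lifetime, the next PM is planned at the time where reliability falls to a chosen threshold R*. The shape and scale parameters are assumed values.

```python
# Sketch of a reliability-threshold PM schedule under an assumed Weibull
# lifetime: schedule PM where R(t) = exp(-(t/eta)**beta) drops to R*.
import math

def pm_time_for_threshold(beta, eta, r_threshold):
    """Time t at which the Weibull reliability equals r_threshold."""
    return eta * (-math.log(r_threshold)) ** (1.0 / beta)

beta, eta = 2.5, 8000.0          # assumed shape and scale (hours)
for r_star in (0.95, 0.90, 0.80):
    t_pm = pm_time_for_threshold(beta, eta, r_star)
    print(f"R* = {r_star:.2f} -> schedule PM at about {t_pm:,.0f} h")
```

As the printout shows, a higher reliability threshold implies earlier (and hence more frequent, more costly) PM, which is consistent with the trade-off described in the abstract.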
This book shows how to build in, evaluate, and demonstrate reliability and availability of components, equipment, systems. It presents the state-of-the-art of reliability engineering, both in theory and practice, and is based on the author's 30 years experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. Besides extensions to cost models and approximate expressions, new in this edition are investigations on common cause failures, phased-mission systems, availability demonstration and estimation, confidence limits at system level, trend tests for early failures or wearout, as well as a review of maintenance strategies, an introduction to Petri nets and dynamic FTA, and a set of problems for homework. Methods and tools are given in a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis as well. This book is a textbook establishing a link between theory and practice, with a large number of tables, figures, and examples to support the practical aspects. (orig.)
Ross-Ross, P.A.; Metcalfe, R.
The CANDU reactor has achieved worldwide distinction because of its reliable performance. To achieve this, special attention was given to the reliability and maintainability of components in the heavy water circuits. Development programs were initiated early in the history of the CANDU reactor to improve the effectiveness of pump seals, valves, and static seals because of unacceptable performance of the commercial equipment then available. As a result, pump seals with a five year life now appear achievable, and valves and static seals are no longer a significant concern in CANDU reactors. Increasing effort is being given to remotely operated tools and fabrication systems for radioactive environments
Embrey, D.E.; Lucas, D.A.
Human reliability assessment (HRA) is used within Probabilistic Risk Assessment (PRA) to identify the human errors (both omission and commission) which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. There exists a variety of HRA techniques, and the selection of an appropriate one is often difficult. This paper reviews a number of available HRA techniques and discusses their strengths and weaknesses. The techniques reviewed include: decompositional methods, time-reliability curves and systematic expert judgement techniques. (orig.)
Christensen, Anders Bøggild; Rasmussen, Tove; Bundesen, Peter Verner
Social problems can be regarded as the very starting point of social work, where the ambition is to remedy the problems and to ensure that vulnerable citizens get a better life. This also means that the discussion of social problems is crucial to the basic professional knowledge of social work. In this book, a range of professionals from across the Danish social work field focus on social problems. They discuss what we actually understand by social problems, how they arise, what consequences they have and, not least, how professionals handle social problems in their daily work. The book is written as a textbook for professional degree programmes in which social problems form a dimension, including the social worker, social educator and nursing programmes.
Basic aspects of V.A. Mashin's article on NPP operator support computerized systems and problems of man-machine interrelation are analyzed. Sharing Mashin's point of view to a significant degree, the author of this article considers that the most important aspects of this problem consist in dividing the responsibility for safe NPP operation among all participants in NPP creation and operation, and in the area of practical experience in assuring the reliability of control functions
Toerroenen, K.; Aho-Mantila, I.
This report is the final technical report of the fracture mechanics part of the Reliability of Reactor Materials Programme, which was carried out at the Technical Research Centre of Finland (VTT) through the years 1981 to 1983. Research and development work was carried out in five major areas, viz. statistical treatment and modelling of cleavage fracture, crack arrest, ductile fracture, instrumented impact testing as well as comparison of numerical and experimental elastic-plastic fracture mechanics. In the area of cleavage fracture the critical variables affecting the fracture of steels are considered in the frame of a statistical model, the so-called WST model. Comparison of fracture toughness values predicted by the model and corresponding experimental values shows excellent agreement for a variety of microstructures. Different possibilities for using the model are discussed. The development work in the area of crack arrest testing concentrated on crack starter properties, the test arrangement and computer control. A computerized elastic-plastic fracture testing method with a variety of test specimen geometries in a large temperature range was developed to the routine stage. Ductile fracture characteristics of reactor pressure vessel steel A533B and comparable weld material are given. The features of a new, patented instrumented impact tester are described. Experimental and theoretical comparisons between the new and conventional testers indicated clearly the improvements achieved with the new tester. A comparison of numerical and experimental elastic-plastic fracture mechanics capabilities at VTT was carried out. The comparison consisted of two-dimensional linear elastic as well as elastic-plastic finite element analysis of four specimen geometries and equivalent experimental tests. (author)
This report investigates, through several examples from the field, the reliability of electronic units in a broader sense. That is, it treats not just random parts failure, but also inadequate reliability design and (externally and internally) induced failures. The report is not meant to be merely an indication of the state of the art for the reliability prediction methods we know, but also a contribution to the investigation of man-machine interplay in the operation and repair of electronic equipment. The report firmly links electronics reliability to safety and risk analyses approaches with a broader, system oriented view of reliability prediction and with postfailure stress analysis. It is intended to reveal, in a qualitative manner, the existence of symptom and cause patterns. It provides a background for further investigations to identify the detailed mechanisms of the faults and the remedial actions and precautions for achieving cost effective reliability. (author)
Sørensen, John Dalsgaard
Reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources. Therefore the turbine components should be designed to have sufficient reliability but also not be too costly (and safe). This paper presents models for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades, substructure and foundation. But since the function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system, reliability aspects of these components are also discussed, and it is described how their reliability influences the reliability of the structural components. Two illustrative examples are presented considering uncertainty modeling, reliability assessment and calibration of partial safety factors for structural wind turbine components exposed...
Kondrat'ev, A.N.; Kosarev, Yu.A.; Yulikov, E.A.
In this paper, problems of transportation of spent nuclear fuel to reprocessing plants are discussed. The solutions proposed are aimed at making the transportation as economic and safe as possible. The increase in the number of nuclear power plants in the USSR and the great distances between these plants and the reprocessing plants involve an intensification of spent fuel transportation. Higher burnup and reduced holdup time make bulkier casks necessary. In this connection, the economic problems become still more important. One way of solving the problem is the development of rational and cheap cask designs. Also, the worldwide tightening of environmental and personnel health protection requirements makes it necessary to increase transportation reliability and safety. The paper summarizes safe transportation rules, clarifying the following questions: the increase of the quantity of spent fuel per transport unit; rational shipment organization that minimizes vehicle turnover cycle duration; development of reliable calculation methods to determine strength, thermal conditions and nuclear safety of transport packaging as applied to vehicles of high capacity; maximum unification of vehicles, calculation methods and documents; and cask testing on models and at pilot scale on specific test rigs to assure that they meet the international safe fuel shipment rules. Besides, some considerations on the choice and use of structural materials for casks are given, and problems of manufacturing such casks from uranium and lead are considered, as well as problems of the development of fireproof shells, control instrumentation, vehicle decontamination, etc. All the problems are considered from the point of view of normal and accidental shipment conditions. Conclusions are presented
Benezech, Vincent; Coulombel, Nicolas
This paper studies the impact of service frequency and reliability on the choice of departure time and the travel cost of transit users. When the user has (α, β, γ) scheduling preferences, we show that the optimal head start decreases with service reliability, as expected. It does not necessarily decrease with service frequency, however. We derive the value of service headway (VoSH) and the value of service reliability (VoSR), which measure the marginal effect on the e...
Kanade, Varun; Thaler, Justin
We study several questions in the reliable agnostic learning framework of Kalai et al. (2009), which captures learning tasks in which one type of error is costlier than others. A positive reliable classifier is one that makes no false positive errors. The goal in the positive reliable agnostic framework is to output a hypothesis with the following properties: (i) its false positive error rate is at most $\epsilon$, (ii) its false negative error rate is at most $\epsilon$ more than that of the...
Gur, Gozde; Turgut, Elif; Dilek, Burcu; Baltaci, Gul; Bek, Nilgun; Yakut, Yavuz
The present study tested the reliability and validity of the Turkish version of the visual analog scale foot and ankle (VAS-FA) among healthy subjects and patients with foot problems. A total of 128 participants, 65 healthy subjects and 63 patients with foot problems, were evaluated. The VAS-FA was translated into Turkish and administered to the 128 subjects on 2 separate occasions with a 5-day interval. The test-retest reliability and internal consistency were assessed with the intraclass correlation coefficient and Cronbach's α. The validity was assessed using the correlations with Turkish versions of the Foot Function Index, the Foot and Ankle Outcome Score, and the Short-Form 36-item Health Survey. A statistically significant difference was found between the healthy group and the patient group in the overall score and subscale scores of the VAS-FA (p < .05), and the VAS-FA correlated with the Turkish versions of the Foot Function Index, Foot and Ankle Outcome Score, and Short-Form 36-item Health Survey scores in both the healthy and patient groups. The Turkish version of the VAS-FA is sensitive enough to distinguish foot and ankle-specific pathologic conditions from asymptomatic conditions. The Turkish version of the VAS-FA is a reliable and valid method and can be used for foot-related problems.
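The internal-consistency statistic used in studies of this kind can be computed as in the sketch below; the item-by-subject scores are fabricated placeholders, not VAS-FA data.

```python
# Sketch of Cronbach's alpha from a subjects-by-items score matrix.
# The scores are simulated placeholders, not study data.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = subjects, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, size=(40, 1))         # 40 subjects
items = true_score + rng.normal(0, 5, size=(40, 6))   # 6 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```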
Pourkarim Guilani, Pedram; Sharifi, Mani; Niaki, S.T.A.; Zaretalab, Arash
In multi-state system (MSS) reliability problems, it is assumed that the components of each subsystem have different performance rates with certain probabilities. This leads to extensive computational effort when using the commonly employed universal generating function (UGF) and the recursive algorithm to obtain the reliability of systems consisting of a large number of components. This research deals with evaluating the reliability of non-repairable three-state systems and proposes a novel method based on a Markov process for which an appropriate state definition is provided. It is shown that solving the derived differential equations significantly reduces the computational time compared to the UGF and the recursive algorithm. - Highlights: • Reliability evaluation of non-repairable three-state systems is aimed at. • A novel method based on a Markov process is proposed. • An appropriate state definition is provided. • Computational time is significantly less compared to that of the UGF and the recursive methods
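To illustrate the Markov-process idea in its simplest form (not the paper's system-level formulation), the sketch below solves the Kolmogorov forward equations for a single non-repairable three-state component (perfect, degraded, failed) with assumed transition rates.

```python
# Sketch: state probabilities of a non-repairable three-state component by
# solving dP/dt = P Q for an assumed generator matrix Q.
import numpy as np
from scipy.integrate import solve_ivp

lam_12, lam_13, lam_23 = 1e-3, 2e-4, 5e-3   # assumed transition rates (1/h)
Q = np.array([[-(lam_12 + lam_13), lam_12,  lam_13],
              [0.0,               -lam_23,  lam_23],
              [0.0,                0.0,     0.0]])

def forward(t, p):
    # Kolmogorov forward equations for the row vector of state probabilities.
    return p @ Q

sol = solve_ivp(forward, (0.0, 1000.0), [1.0, 0.0, 0.0], t_eval=[1000.0])
p_perfect, p_degraded, p_failed = sol.y[:, -1]
print(f"P(perfect)={p_perfect:.4f}, P(degraded)={p_degraded:.4f}, "
      f"P(failed)={p_failed:.4f}")
```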
Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios
Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.
Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping
To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab both in percentage and in colour difference (ΔE). Difference was analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was prone to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
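The colour-difference metric underlying the reported ΔE values can be computed as in this brief sketch, assuming the common CIE76 formula; the two L*a*b* readings are illustrative, not Crystaleye data.

```python
# Sketch: CIE76 Delta E between two CIE L*a*b* measurements of the same tab.
# The readings are illustrative placeholders.
import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

reading_1 = (72.3, 1.8, 18.5)   # L*, a*, b* of one measurement (hypothetical)
reading_2 = (72.0, 2.1, 18.9)   # repeat measurement of the same tab
print(f"Delta E = {delta_e_cie76(reading_1, reading_2):.2f}")
```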
The objective of this study is to show which analysis requirements are associated with the claim that a reliability analysis, as practised at present, can provide a quantitative risk assessment in absolute terms. The question arises of whether this claim can be substantiated without direct access to the specialist technical departments of a manufacturer and to the multifarious detailed information available in these departments. The individual problems arising in the course of such an analysis are discussed using the example of a reliability analysis of a core flooding system. The questions discussed relate to analysis organisation, sequence analysis, fault-tree analysis, and the treatment of operational processes superimposed on the failure and repair processes. (orig.)
The extent of the reliability of the finite element method for the analysis of nuclear reactor structures, and of reactor vessels in particular, is highlighted, together with the need for the engineer to guard against the pitfalls that may arise from both physical and mathematical models. A systematic way of checking the model to obtain reasonably accurate solutions is presented. Quite often sophisticated elements are suggested for specific design and stress concentration problems. The desirability or otherwise of these elements, their scope and utility vis-a-vis the use of a large stack of conventional elements are discussed from the viewpoint of stress analysts. The methods of obtaining a check on the reliability of the finite element solutions, either through modelling changes or an extrapolation technique, are discussed. (author)
Mitchell, L.A.; Osgood, C.; Radcliffe, S.J.
The standard methods of reliability analysis can only be applied if valid failure statistics are available. In a developing technology the statistics which have been accumulated, over many years of conventional experience, are often rendered useless by environmental effects. Thus new data, which take account of the new environment, are required. This paper discusses the problem of optimizing the acquisition of these data when time-scales and resources are limited. It is concluded that the most fruitful strategy in assessing the reliability of mechanisms is to study the failures of individual joints whilst developing, where necessary, analytical tools to facilitate the use of these data. The approach is illustrated by examples from the field of tribology. Failures of rolling element bearings in moist, high-pressure carbon dioxide illustrate the important effects of apparently minor changes in the environment. New analytical techniques are developed from a study of friction failures in sliding joints. (author)
Hearing Problems: loss in the ability to hear or discriminate ... This flow chart will help direct you if hearing loss is a problem for you or a ...
The paper is devoted to the creation of an effective system of mapping at all levels of tourist-excursion activity that will boost the promotion of the tourist product in the domestic and foreign tourist markets. The State Scientific-Production Enterprise «Kartographia» actively participates in cartographic provision for tourism by producing travel pieces, survey, large-scale and route maps, atlases, travel guides and city plans. It produces maps of varied content covering the territory of Ukraine, its individual regions and cities of interest for tourist excursions. The list and scope of cartographic products prepared for publication and released over the last five years are given. The development of new types of tourism encourages publishers to create various cartographic products for the needs of tourists, guaranteeing high accuracy, reliability of information and ease of use. The variety of scientific and practical problems in tourism and excursion activities that are solved using maps and plans makes it difficult to determine the criteria for assessing their reliability. The author proposes to introduce the concept of «relevance», understood as the suitability of maps for solving specific problems. The basis of the peer review is the suitability of maps for producing objective results according to the following criteria: appropriateness of the map to the target tasks (area, theme, destination); accuracy of the given parameters (projection, scale, height interval); year of the survey of the location or of the mapping; selection methods and the algorithm for processing measurement results; availability of assistive devices (instrumentation, computer technology, simulation devices). These criteria ensure that the reliability and accuracy of the result are as acceptable to consumers as possible. The author proposes a set of measures aimed at improving the content, quality and reliability of cartographic production.
Kang, I. S.; Cho, B. S.; Choi, M. J. [KOPEC, Yongin (Korea, Republic of)]
Digital technology is rapidly and widely being applied, in Korea as well as in foreign countries, both in replacing analog components installed in existing plants and in designing control and monitoring systems for new nuclear power plants. Despite the many merits of digital technology, it faces a new problem of reliability assurance. Studies to solve this problem are being performed vigorously in foreign countries. The reliability of the KNGR Engineered Safety Features Component Control System (ESF-CCS), a digital-based I and C system, was analyzed to verify fulfillment of the ALWR EPRI-URD requirement for reliability analysis and to eliminate hazards in a design applying the new technology. A qualitative analysis using FMEA and a quantitative analysis using a reliability block diagram were performed. The results of the analyses are shown in this paper.
Sembiring, N.; Ginting, E.; Darnello, T.
In a company that produces refined sugar, critical machines on the production floor have not reached the required level of availability because they often suffer breakdowns. This results in sudden losses of production time and production opportunities. This problem can be addressed by the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the pattern of the distribution. The method can provide values of the reliability, failure rate, and availability of a machine during the scheduled maintenance interval. The distribution test on the time between failures (MTTF) data showed that the flexible hose component follows a lognormal distribution while the teflon cone lifting component follows a Weibull distribution, whereas the distribution test on the mean time to repair (MTTR) data showed that the flexible hose component follows an exponential distribution while the teflon cone lifting component follows a Weibull distribution. For the flexible hose component, the actual replacement schedule of every 720 hours gives a reliability of 0.2451 and an availability of 0.9960, while for the critical teflon cone lifting component the actual replacement schedule of every 1944 hours gives a reliability of 0.4083 and an availability of 0.9927.
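The quantities reported above can be computed as in the following sketch: the reliability at a replacement interval under a fitted lifetime distribution and the steady-state availability MTTF / (MTTF + MTTR). The Weibull parameters and MTTF/MTTR values are assumed for illustration, not the study's fitted values.

```python
# Sketch: reliability at a replacement interval and steady-state availability.
# Parameters are assumed, not the study's fitted values.
import math

def weibull_reliability(t, beta, eta):
    return math.exp(-(t / eta) ** beta)

def availability(mttf, mttr):
    return mttf / (mttf + mttr)

beta, eta = 1.4, 1500.0                 # assumed Weibull shape/scale (hours)
interval = 720.0                        # replacement interval from the text
print(f"R({interval:.0f} h)  = {weibull_reliability(interval, beta, eta):.4f}")
print(f"Availability = {availability(mttf=1200.0, mttr=5.0):.4f}")
```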
This study investigated the relation between problem-solving strategies in marital conflict and marital satisfaction. Four problem-solving strategies (Dialogue, Loyalty, Escalation of conflict and Withdrawal) were measured by the Problem-Solving Strategies Inventory, in two versions: self-report and report of the partner's perceived behaviour. This measure refers to the concept of Rusbult, Johnson and Morrow, and meets high standards of reliability (Cronbach's alpha from 0.78 to 0.94) and validity. Marital satisfaction was measured by the Marriage Success Scale. The sample was composed of 147 marital couples. The study revealed that satisfied couples, in comparison with non-satisfied couples, tend to use constructive problem-solving strategies (Dialogue and Loyalty). They rarely use destructive strategies like Escalation of conflict or Withdrawal. Dialogue is the strategy most positively connected with satisfaction. These might be very important guidelines for couples' psychotherapy. One's own Loyalty is also a significant positive predictor of male satisfaction. The study shows that constructive attitudes are the most significant predictors of marriage satisfaction. It is therefore worth concentrating mostly on them in the psychotherapeutic process instead of eliminating destructive attitudes.
Barnes, M.; Bradley, P.A.; Brewer, M.A.
The increased usage and sophistication of computers applied to real time safety-related systems in the United Kingdom has spurred on the desire to provide a standard framework within which to assess dependable computing systems. Recent accidents and ensuing legislation have acted as a catalyst in this area. One particular aspect of dependable computing systems is that of software, which is usually designed to reduce risk at the system level, but which can increase risk if it is unreliable. Various organizations have recognized the problem of assessing the risk imposed to the system by unreliable software, and have taken initial steps to develop and use such assessment frameworks. This paper relates the approach of Consultancy Services of AEA Technology in developing a framework to assess the risk imposed by unreliable software. In addition, the paper discusses the experiences gained by Consultancy Services in applying the assessment framework to commercial and research projects. The framework is applicable to software used in safety applications, including proprietary software. Although the paper is written with Nuclear Reactor Safety applications in mind, the principles discussed can be applied to safety applications in all industries
Myers, J. R.; Yeghiazarian, L.
framework will significantly improve the efficiency and precision of sustainable watershed management strategies through providing a better understanding of how watershed characteristics and environmental parameters affect surface water quality and sustainability. With microbial contamination posing a serious threat to the availability of clean water across the world, it is necessary to develop a framework that evaluates the safety and sustainability of water systems in respect to non-point source fecal microbial contamination. The concept of water safety is closely related to the concept of failure in reliability theory. In water quality problems, the event of failure can be defined as the concentration of microbial contamination exceeding a certain standard for usability of water. It is pertinent in watershed management to know the likelihood of such an event of failure occurring at a particular point in space and time. Microbial fate and transport are driven by environmental processes taking place in complex, multi-component, interdependent environmental systems that are dynamic and spatially heterogeneous, which means these processes and therefore their influences upon microbial transport must be considered stochastic and variable through space and time. A physics-based stochastic model of microbial dynamics is presented that propagates uncertainty using a unique sampling method based on artificial neural networks to produce a correlation between watershed characteristics and spatial-temporal probabilistic patterns of microbial contamination. These results are used to address the question of water safety through several sustainability metrics: reliability, vulnerability, resilience and a composite sustainability index. System reliability is described uniquely though the temporal evolution of risk along watershed points or pathways. Probabilistic resilience describes how long the system is above a certain probability of failure, and the vulnerability metric describes how
Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying
This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
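For orientation, a minimal crude Monte Carlo estimate of a failure probability looks like the sketch below; the limit-state function g = R - S and the distributions of resistance R and load S are assumed for the example and are not taken from the paper.

```python
# Minimal crude Monte Carlo estimate of a failure probability.
# Limit state g = R - S with assumed distributions for R and S.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

resistance = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)
load = rng.normal(loc=35.0, scale=5.0, size=n)

failures = resistance - load < 0.0
p_f = failures.mean()
std_err = np.sqrt(p_f * (1 - p_f) / n)
print(f"P_f ~= {p_f:.5f} (std. error {std_err:.5f})")
```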
Spinato, F.; Tavner, P.J.; Bussel, van G.J.W.; Koutoulakos, E.
We have investigated the reliability of more than 6000 modern onshore wind turbines and their subassemblies in Denmark and Germany over 11 years and particularly changes in reliability of generators, gearboxes and converters in a subset of 650 turbines in Schleswig Holstein, Germany. We first start
Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.
Uythoven, J; Carlier, E; Castronuovo, F; Ducimetière, L; Gallet, E; Goddard, B; Magnin, N; Verhagen, H
The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with their calculations.
The EuReDatA Working Group produced a basic document that addressed many of the problems associated with the design of a suitable data collection scheme to achieve pre-defined objectives. The book that resulted from this work describes the need for reliability data, data sources and collection procedures, component description and classification, form design, data management, updating and checking procedures, the estimation of failure rates, availability and utilisation factors, and uncertainties in reliability parameters. (DG)
Wireless 5G systems will not only be "4G, but faster". One of the novel features discussed in relation to 5G is Ultra-Reliable Communication (URC), an operation mode not present in today's wireless systems. URC refers to provision of a certain level of communication service almost 100% of the time. Example URC applications include reliable cloud connectivity, critical connections for industrial automation and reliable wireless coordination among vehicles. This paper puts forward a systematic view on URC in 5G wireless systems. It starts by analyzing the fundamental mechanisms that constitute ... short-term URC (URC-S). The second dimension is represented by the type of reliability impairment that can affect the communication reliability in a given scenario. The main objective of this paper is to create the context for defining and solving the new engineering problems posed by URC in 5G.
Mrig, L. [ed.]
This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.
This diploma thesis concentrates on problem posing from the students' point of view. Problem posing can either be seen as a teaching method which can be used in class, or it can be used as a tool for researchers or teachers to assess the level of students' understanding of the topic. In my research, I compare three classes, one mathematics specialist class and two generalist classes, in their ability of problem posing. As an assessment tool, it seemed that mathematics specialists were abl...
Christensen, Kip W.; Martin, Loren
Interpersonal and cognitive skills, adaptability, and critical thinking can be developed through problem solving and cooperative learning in technology education. These skills have been identified as significant needs of the workplace as well as for functioning in society. (SK)
Mellal, Mohamed Arezki; Zio, Enrico
Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.
Alberto W. S. Mello Jr
This work presents a methodology for determining the reliability of fracture control plans for structures subjected to cyclic loads. It considers the variability of the parameters involved in the problem, such as the initial flaw and the crack growth curve. The probability of detection (POD) curve of the field non-destructive inspection method and the condition/environment are used as important factors for structural confidence. According to classical damage tolerance analysis (DTA), inspection intervals are based on detectable crack size and crack growth rate. However, all variables have uncertainties, which makes the final result totally stochastic. The material properties, flight loads, engineering tools and even the reliability of inspection methods are subject to uncertainties which can significantly affect the final maintenance schedule. The present methodology incorporates all the uncertainties in a simulation process, such as Monte Carlo, and establishes a relationship between the reliability of the overall maintenance program and the proposed inspection interval, forming a "cascade" chart. Due to the scatter, it also defines the confidence level of the "acceptable" risk. As an example, the DTA results are presented for the upper cockpit longeron splice bolt of the BAF upgraded F-5EM. In this case, two possibilities of inspection intervals were found: one that can be characterized as remote risk, with a probability of failure (integrity nonsuccess) of 1 in 10 million per flight hour; and the other as extremely improbable, with a probability of nonsuccess of 1 in 1 billion per flight hour, according to aviation standards. These two results are compared with the classical military airplane damage tolerance requirements.
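The following is a greatly simplified Monte Carlo sketch of the idea described above, not the paper's methodology: sample an initial flaw size and a growth rate, apply a notional probability-of-detection curve at the inspection, grow the crack over one inspection interval, and count cases that reach a critical size undetected. All distributions and parameters are assumptions for illustration only.

```python
# Greatly simplified Monte Carlo sketch of a probability of failure over one
# inspection interval. All parameters and distributions are assumed.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

a0 = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)       # initial flaw, mm
growth = rng.lognormal(mean=np.log(5e-4), sigma=0.6, size=n)   # mm per flight hour
interval = 4000.0                                              # flight hours
a_crit = 12.0                                                  # critical size, mm

pod = 1.0 / (1.0 + np.exp(-(a0 - 2.0)))   # notional logistic POD at inspection
detected = rng.random(n) < pod            # detected (and repaired) cracks

a_end = a0 + growth * interval            # crack size at end of the interval
p_fail = np.mean((a_end > a_crit) & ~detected)
print(f"probability of failure over the interval ~= {p_fail:.2e}")
```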
Kozin, I.O.; Petersen, K.E.
Imprecise probabilities, which have been developed during the last two decades, offer a considerably more general theory having many advantages which make it very promising for reliability and safety analysis. The objective of the paper is to argue that imprecise probabilities are a more appropriate tool for reliability and safety analysis, that they allow the behavior of nuclear industry objects to be modeled more comprehensively, and that they give a possibility to solve some problems that remain unsolved in the framework of the conventional approach. Furthermore, some specific examples are given from which we can see the usefulness of the tool for solving some reliability tasks
Souza, Bismarck A. de; Borges, Jose C.
This work presents a statistical and reliability analysis covering data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size so as to obtain reliable results while minimizing computation time. (author). 5 refs, 8 figs
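The sample-size trade-off discussed here is usually framed through the relative standard error of the Monte Carlo tally; a generic form (not specific to the authors' slowing-down and shielding cases) is:

```latex
% Relative standard error of a Monte Carlo estimate of a probability p
% from N independent histories, and the N needed for a target error \epsilon.
\mathrm{RSE}=\sqrt{\frac{1-p}{p\,N}}
\quad\Longrightarrow\quad
N \;\ge\; \frac{1-p}{p\,\epsilon^{2}} .
% Example: p \approx 10^{-3},\ \epsilon = 0.05 \Rightarrow N \gtrsim 4\times 10^{5}.
```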
A. R. Soofiabadi
This paper develops a method for nodal pricing and a market clearing mechanism considering the reliability of the system. The effects of component reliability on electricity price, market participants’ profit and system social welfare are considered. This paper considers reliability both for evaluation of market participants’ optimality and for a fair pricing and market clearing mechanism. To achieve fair pricing, the nodal price is obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, comprehensive criteria are introduced for the optimality evaluation of market participants. The social welfare of the system and the system efficiency are increased under the proposed modified nodal pricing method.
Laverick, C.; Powell, J.; Hsieh, S.; Reich, M.; Botts, T.; Prodell, A.
This compilation adapts studies on safety and reliability in fusion magnets to similar problems in superconducting MHD magnets. MHD base load magnet requirements have been identified from recent Francis Bitter National Laboratory reports and those of other contracts. Information relevant to this subject from recent base load magnet design reports by AVCO-Everett Research Laboratories and Magnetic Corporation of America is included, together with some viewpoints from a BNL workshop on the structural analysis needed for superconducting coils in magnetic fusion energy. A summary of design codes used in large bubble chamber magnet design is also included
Carlson, D.D.; Gallup, D.R.; Kolaczkowski, A.M.; Kolb, G.J.; Stack, D.W.; Lofgren, E.; Horton, W.H.; Lobner, P.R.
This document presents procedures for conducting analyses of a scope similar to those performed in Phase II of the Interim Reliability Evaluation Program (IREP). It documents the current state of the art in performing the plant systems analysis portion of a probabilistic risk assessment. Insights gained into managing such an analysis are discussed. Step-by-step procedures and methodological guidance constitute the major portion of the document. While not to be viewed as a cookbook, the procedures set forth the principal steps in performing an IREP analysis. Guidance for resolving the problems encountered in previous analyses is offered. Numerous examples and representative products from previous analyses clarify the discussion
Bohoris, George A.
This paper summarizes part of the work carried out to date on seeking analytical solutions to the two-sample problem with censored data in the context of reliability and maintenance optimization applications. For this purpose, parametric two-sample tests for failure and censored reliability data are introduced and their applicability/effectiveness in common engineering problems is reviewed
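One simple parametric instance of such a two-sample test, assuming exponential lifetimes with right censoring, is a likelihood-ratio comparison of the two failure rates. The data below are invented and the exponential assumption is mine, not the paper's.

```python
import numpy as np
from scipy.stats import chi2

def exp_loglik(rate, times, failed):
    """Log-likelihood of exponential lifetimes with right censoring."""
    return failed.sum() * np.log(rate) - rate * times.sum()

def lr_test(t1, d1, t2, d2):
    """Likelihood-ratio test that two censored exponential samples share one failure rate."""
    r1, r2 = d1.sum() / t1.sum(), d2.sum() / t2.sum()        # per-sample MLEs
    r0 = (d1.sum() + d2.sum()) / (t1.sum() + t2.sum())       # pooled MLE under H0
    lr = 2 * (exp_loglik(r1, t1, d1) + exp_loglik(r2, t2, d2)
              - exp_loglik(r0, t1, d1) - exp_loglik(r0, t2, d2))
    return lr, chi2.sf(lr, df=1)                              # asymptotic p-value

# invented example: times in hours, d = 1 for failure, 0 for censoring
t_a = np.array([120., 340., 500., 500., 610.]); d_a = np.array([1, 1, 0, 0, 1])
t_b = np.array([ 80., 150., 200., 260., 300.]); d_b = np.array([1, 1, 1, 0, 1])
print(lr_test(t_a, d_a, t_b, d_b))
```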
Skovhus, Randi Boelskifte; Thomsen, Rie
This article introduces a method for critical reviews and explores the ways in which problems have been formulated in knowledge production on career guidance in Denmark over a 10-year period from 2004 to 2014. The method draws upon the work of Bacchi, focussing on the ‘What's the problem represented to be’ (WPR) approach. Forty-nine empirical studies on Danish youth career guidance were included in the study. An analysis of the issues in focus resulted in nine problem categories. One of these, ‘targeting’, is analysed using the WPR approach. Finally, the article concludes that the WPR approach provides a constructive basis for a critical analysis and discussion of the collective empirical knowledge production on career guidance, stimulating awareness of problems and potential solutions among the career guidance community.
18 CFR § 39.5 Reliability Standards (Conservation of Power and Water Resources, 2010-04-01): (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective under...
Parsells, R.F.; Howard, H.P.
As TFTR approaches experiments in the Q=1 regime, machine reliability becomes a major variable in achieving experimental objectives. This paper describes the methods used to quantify current reliability levels, the levels required for D-T operations, proposed methods for reliability growth and improvement, and tracking of reliability performance during that growth. Included in this scope are data collection techniques and their shortcomings, bounding current reliability on the upper end, and requirements for D-T operations. Problem characterization through Pareto diagrams provides insight into recurrent failure modes, and the use of Duane plots for charting both cumulative and instantaneous reliability changes is explained and demonstrated
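For readers unfamiliar with Duane plots, the underlying reliability-growth model is sketched below in its generic form; this is not the TFTR-specific fit.

```latex
% Duane reliability-growth model: cumulative MTBF grows as a power of
% cumulative operating time T, i.e. the plot is linear on log-log axes.
\theta_{c}(T)=\theta_{1}\,T^{\alpha},\qquad
\theta_{i}(T)=\frac{\theta_{c}(T)}{1-\alpha}.
% \theta_c: cumulative MTBF; \theta_i: instantaneous (current) MTBF;
% \alpha: growth slope estimated from the plot (often in the range 0.3 to 0.5).
```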
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
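The reliability adjustment described here is typically a shrinkage estimator of the form below; the notation is generic, and the group-mean anchor corresponds to the basis-of-shrinkage caveat raised in the abstract.

```latex
% Reliability-adjusted physician measure: the observed rate is shrunk toward the
% group mean in proportion to the measure's reliability R \in [0,1].
R=\frac{\sigma^{2}_{\text{between}}}
       {\sigma^{2}_{\text{between}}+\sigma^{2}_{\text{within}}/n},\qquad
\hat{y}_{\text{adj}} = R\,\bar{y}_{\text{physician}} + (1-R)\,\bar{y}_{\text{group}} .
% Small case volume n gives low R and hence stronger shrinkage toward the group mean.
```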
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
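A quick way to see the reliability gain from adding irreversible states: with n identical irreversible steps of rate λ, the completion time is Erlang-distributed, so its relative variability falls as 1/√n. This is the standard result consistent with the paper's claim that more states yield higher reliability.

```latex
% Completion time of an irreversible n-state chain with equal rates \lambda
% is the sum of n i.i.d. exponentials (Erlang distribution).
T=\sum_{k=1}^{n} X_k,\quad X_k\sim\mathrm{Exp}(\lambda):\qquad
\mathbb{E}[T]=\frac{n}{\lambda},\quad
\mathrm{Var}[T]=\frac{n}{\lambda^{2}},\quad
\mathrm{CV}[T]=\frac{1}{\sqrt{n}} .
```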
The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least-sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects, either because they disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates not only procedures for treating all significant time-dependent phenomena but also procedures for treating these phenomena when they are conditionally linked or characterized by arbitrarily selected probability distributions. The level of sophistication that is required is provided by a dynamic, Monte Carlo modeling approach. A computer code that uses a dynamic, Monte Carlo modeling approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions
Kellerer, Hans; Pisinger, David
Thirteen years have passed since the seminal book on knapsack problems by Martello and Toth appeared. On this occasion a former colleague exclaimed back in 1990: "How can you write 250 pages on the knapsack problem?" Indeed, the definition of the knapsack problem is easily understood even by a non-expert who will not suspect the presence of challenging research topics in this area at first glance. However, in the last decade a large number of research publications contributed new results for the knapsack problem in all areas of interest such as exact algorithms, heuristics and approximation schemes. Moreover, the extension of the knapsack problem to higher dimensions both in the number of constraints and in the number of knapsacks, as well as the modification of the problem structure concerning the available item set and the objective function, leads to a number of interesting variations of practical relevance which were the subject of intensive research during the last few years. Hence, two years ago ...
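For readers who have not met the problem, a minimal dynamic-programming solution of the basic 0/1 knapsack, the starting point for the extensions discussed above, is sketched here with an invented instance.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming over capacities; O(n * capacity)."""
    best = [0] * (capacity + 1)                 # best[c] = max value achievable within capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):    # iterate downward so each item is used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# tiny invented instance
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # -> 220
```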
Chan, Joel; Paletz, Susannah B F; Schunn, Christian D
Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.
Zhou, Yan; Fu, Liya; Zhang, Jun; Hui, Yongchang
To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic for the complex system and derive the limit distribution of the test statistic. Therefore, we can obtain a confidence interval for the reliability and make statistical inferences. Simulation studies also demonstrate the theoretical results.
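For context, the minimal-path description of system reliability that such a test is built on can be written as follows (standard notation, not the authors').

```latex
% System reliability via minimal path sets P_1,\dots,P_p of a coherent system
% with independent components; x_i = 1 if component i works, r_i = \Pr(x_i = 1).
R_s=\Pr\Bigl(\,\bigcup_{j=1}^{p}\;\bigcap_{i\in P_j}\{x_i=1\}\Bigr)
   \;\ge\; \max_{j}\ \prod_{i\in P_j} r_i ,
% with the exact value obtainable by inclusion-exclusion over the path sets.
```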
A methodology for evaluating the reliability of NPP protective structures under the impact of an aircraft crash is considered. The methodology is based on probabilistic analysis of all potential events. The problem is solved in three stages: determination of the loads on structural units, calculation of the local reliability of the protective structures under the assigned loads, and estimation of the overall structure reliability. The methodology proposed may be applied at the NPP design stage and for determining the reliability of existing structures
Colmenar, J. Manuel; Risco-Martin, Jose L.; Atienza Alonso, David; Garnica, Oscar; Hidalgo, Jose I.; Lanchares, Juan
Technology scaling has offered advantages to embedded systems, such as increased performance, more available memory and reduced energy consumption. However, scaling also brings a number of problems like reliability degradation mechanisms. The intensive activity of devices and high operating temperatures are key factors for reliability degradation in latest technology nodes. Focusing on embedded systems, the memory is prone to suffer reliability problems due to the intensive use of dynamic mem...
Department of Homeland Security — NERC is an international regulatory authority that works to improve the reliability of the bulk power system in North America. NERC works with many different regional...
Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards. AGENCY: Federal... Reliability Standards identified by the North American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. FOR FURTHER INFORMATION CONTACT: Kevin Ryan (Legal Information...
Yang, Joon-Eon; Hwang, Mee-Jung; Sung, Tae-Yong; Jin, Youngho
Reliability allocation is an optimization process of minimizing the total plant costs subject to the overall plant safety goal constraints. Reliability allocation was applied to determine the reliability characteristics of reactor systems, subsystems, major components and plant procedures that are consistent with a set of top-level performance goals: the core melt frequency, acute fatalities and latent fatalities. Reliability allocation can be performed to improve the design, operation and safety of new and/or existing nuclear power plants. Reliability allocation is a difficult multi-objective optimization problem as well as a global optimization problem. The genetic algorithm, known as one of the most powerful tools for most optimization problems, is applied to the reliability allocation problem of a typical pressurized water reactor in this article. One of the main problems of reliability allocation is defining realistic objective functions. Hence, in order to optimize the reliability of the system, the cost of improving and/or degrading the reliability of the system should be included in the reliability allocation process. We used techniques derived from value impact analysis to define the realistic objective function in this article
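Schematically, the allocation problem that the genetic algorithm searches over has the form below; the cost functions and goal set are placeholders standing in for the value-impact-derived objectives described in the article.

```latex
% Generic reliability-allocation problem: choose component/system reliabilities
% R_i to minimize total improvement/degradation cost subject to plant-level
% safety goals; c_i(.) and the goals G are illustrative placeholders.
\min_{R_1,\dots,R_m}\ \sum_{i=1}^{m} c_i(R_i)
\quad\text{s.t.}\quad
f_{\mathrm{CMF}}(R_1,\dots,R_m)\le G_{\mathrm{CMF}},\quad
f_{\mathrm{acute}}(\cdot)\le G_{\mathrm{acute}},\quad
f_{\mathrm{latent}}(\cdot)\le G_{\mathrm{latent}},
\qquad R_i^{\min}\le R_i\le R_i^{\max}.
```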
Gupta, G.K.; Sedding, H.G.; Culbert, I.M.
Reliable performance of rotating machines, especially generators and primary heat transport pump motors, is critical to the efficient operation of nuclear stations. A significant number of premature machine failures have been attributed to stator insulation problems. Ontario Hydro has attempted to assure the long-term reliability of the insulation system in critical rotating machines through proper specifications and quality assurance tests for new machines and periodic on-line and off-line diagnostic tests on machines in service. The experience gained over the last twenty years is presented in this paper. Functional specifications have been developed for the insulation system in critical rotating machines based on engineering considerations and our past experience. These specifications include insulation stress, insulation resistance and polarization index, partial discharge levels, dissipation factor and tip-up, and AC and DC hipot tests. Voltage endurance tests are specified for the groundwall insulation system of full-size production coils and bars. For machines with multi-turn coils, turn insulation strength for fast-fronted surges is specified and verified through tests on all coils in the factory and on samples of finished coils in the laboratory. Periodic on-line and off-line diagnostic tests were performed to assess the condition of the stator insulation system in machines in service. Partial discharges are measured on-line using several techniques to detect any excessive degradation of the insulation system in critical machines. Novel sensors have been developed and installed in several machines to facilitate measurements of partial discharges on operating machines. Several off-line tests are performed either to confirm the problems indicated by the on-line tests or to assess the insulation system in machines which cannot be easily tested on-line. Experience with these tests, including their capabilities and limitations, is presented. (author)
Lee, Sang Yong; Jung, Jae Hyun; Kim, Jae Ho; Kim, Sung Hun
The systems used in plants, military equipment, satellites, etc. consist of many electronic parts acting as control modules, which require relatively higher reliability than other commercial electronic products. In particular, nuclear power plants, being related to radiation safety, require high safety and reliability, so most parts are specified to Military-Standard level. Reliability prediction provides a rational basis for system designs and also indicates the safety significance of system operations. Various reliability prediction tools have been developed in recent decades; among them, the MIL-HDBK-217 method has been widely used as a powerful tool for prediction. In this work, the reliability analysis for the Digital Processor Module (DPM, the control module of SMART) is performed by the Parts Stress Method based on MIL-HDBK-217F NOTICE 2. We use Relex 7.6 from Relex Software Corporation, because the reliability analysis process requires extensive part libraries and data for failure rate calculation
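In the parts stress method, each part's failure rate is a base rate scaled by π-factors, and the module failure rate is the series sum over its parts. The factor set shown below is a typical generic shape; the handbook's actual factors vary by part category and this is not the DPM-specific model.

```latex
% MIL-HDBK-217F parts stress model (generic shape; the \pi-factors present
% depend on the part category).
\lambda_{p}= \lambda_{b}\,\pi_{T}\,\pi_{A}\,\pi_{Q}\,\pi_{E}
\quad\text{[failures / }10^{6}\text{ h]},\qquad
\lambda_{\mathrm{module}}=\sum_{p} N_{p}\,\lambda_{p},\qquad
\mathrm{MTBF}=\frac{1}{\lambda_{\mathrm{module}}}.
% \lambda_b: base failure rate; \pi_T: temperature, \pi_A: application,
% \pi_Q: quality, \pi_E: environment factors; N_p: part count.
```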
Ma, Ke; Wang, Huai; Blaabjerg, Frede
... of energy. New approaches for reliability assessment are being taken in the design phase of power electronics systems based on the physics-of-failure in components. In this approach, many new methods, such as multidisciplinary simulation tools, strength testing of components, translation of mission profiles, and statistical analysis, are involved to enable better prediction and design of reliability for products. This article gives an overview of the new design flow in the reliability engineering of power electronics from the system-level point of view and discusses some of the emerging needs for the technology...
Gonzalez Hernando, J.; Sanchez Izquierdo, J.
The trend in modern NPPCI is toward a broad use of programmable elements. Some aspects concerning the present status of programmable digital systems reliability are reported. Basic differences between the software and hardware concepts require a specific approach to all reliability topics concerning software systems. Software reliability theory was initially developed from analogies with hardware models. At present this approach is changing and specific models are being developed. The growing use of programmable systems makes it necessary to emphasize the importance of more adequate regulatory requirements to include this technology in NPPCI. (author)
Lalli, V. R.; Vargo, D. J.
An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.
Graphene exhibits exciting properties which make it an appealing candidate for use in electronic devices. Reliable processes for device fabrication are crucial prerequisites for this. We developed a large-area CVD synthesis and transfer of graphene films. By patterning these graphene layers using standard photoresist masks, we are able to produce arrays of gated graphene devices with four-point contacts. The etching and lift-off process poses problems because of delamination and contamination due to polymer residues when using standard resists. We introduce a metal etch mask which minimises these problems. The high quality of the graphene is shown by Raman and XPS spectroscopy as well as electrical measurements. The process is of high value for applications, as it improves the processability of graphene using high-throughput lithography and etching techniques.
Haas, P.M.; Swanson, P.J.; Connelly, E.M.
Operational Human Performance Reliability Assessment (OHPRA) is an approach for assessing human performance that is being developed in response to demands from modern process industries for practical and effective tools to assess and improve human performance, and therefore overall system performance and safety. The single most distinguishing feature of the approach is that it defines human performance in "operational" terms. OHPRA is focused not on the generation of human error probabilities, but on practical analysis of human performance to aid management in (1) identifying "fixable" problems and (2) providing input on the importance and nature of potential improvements. The model development in progress uses a unique approach for eliciting expert strategies for assessing performance. A PC-based model incorporating this expertise is planned. A preliminary version of the approach has already been used successfully to identify practical human performance problems in reactor and chemical process plant operations
Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A
The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiencies. The performance against these tests seen at the sites during the first years of LHC running is also reviewed.
Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.
In a deregulated electric power utility industry in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out over Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology
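A small numerical sketch of the absorbing-state calculation used in such a Markov market model is given below; the states and transition rates are invented for illustration and are not the RBTS figures.

```python
import numpy as np

# Continuous-time Markov chain for a hypothetical 3-state market
# ("normal", "stressed", "collapsed"); "collapsed" is absorbing.
# Q_T is the generator restricted to the two transient states.
Q_T = np.array([[-0.20,  0.18],    # normal:   to stressed 0.18/yr, to absorbed 0.02/yr
                [ 1.00, -1.30]])   # stressed: to normal 1.00/yr,   to absorbed 0.30/yr

# Expected time to absorption from each transient state: solve  -Q_T m = 1.
m = np.linalg.solve(-Q_T, np.ones(2))
print(dict(zip(["normal", "stressed"], m.round(1))))
```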
The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that licensees assess their station blackout coping and recovery capability. EDGs are the principal emergency ac power sources for avoiding a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) a nuclear unit EDG reliability level of at least 0.95, and (2) an EDG reliability program to monitor and maintain the required EDG reliability levels. NUMARC 87-00, Guidelines and Technical Bases for NUMARC Initiatives Addressing Station Blackout at Light Water Reactors, also provides guidance on such needs. The resolution of GSI B-56, Diesel Reliability, will be accomplished by issuing Regulatory Guide 1.9, Rev. 3, Selection, Design, Qualification, Testing, and Reliability of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Plants. This revision will integrate into a single regulatory guide pertinent guidance previously addressed in R.G. 1.9, Rev. 2, R.G. 1.108, and Generic Letter 84-15. R.G. 1.9 has been expanded to define the principal elements of an EDG reliability program for monitoring and maintaining the EDG reliability levels selected for SBO. In addition, alert levels and corrective actions have been defined to detect a deteriorating situation for all EDGs assigned to a particular nuclear unit, as well as an individual problem EDG
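The 0.95 reliability level is tracked as a simple per-demand success fraction over the monitored start and load-run demands; a worked example with invented numbers:

```latex
% Per-demand EDG reliability from start/load-run records (numbers invented).
\hat{R}=\frac{n-f}{n}
\qquad\text{e.g.}\qquad
n=100\ \text{valid demands},\ f=2\ \text{failures}
\;\Rightarrow\; \hat{R}=0.98 \ge 0.95 .
% Slipping to f = 6 failures in the last 100 demands (\hat{R}=0.94) would breach
% the 0.95 target and trigger the program's alert levels and corrective actions.
```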
A short outline is given of the history of the problem of repairing radiation injuries, specifically its molecular mechanisms. The most urgent problems currently confronting researchers are noted: further study of the role of DNA repair in post-radiation recovery, the search for ways to activate and suppress DNA repair, investigations into the activity balance of various repair enzymes, as well as the problem of errors in the structure of repaired DNA. An important role is attached to investigations of DNA repair in solving a number of practical problems
Diament, M.J.; Takasugi, J.; Kangarloo, H.
The reliability of gray scale sonography for the screening of hydronephrosis is assessed in a retrospective clinical study of pediatric patients. The sensitivity was 89% and the specificity 95%. Discrepancies between the ultrasound and urographic diagnosis of mild hydronephrosis - which was usually not clinically significant - accounted for all of the errors. False positive studies can be reduced by scanning the kidneys with the bladder empty. False negative examinations may be due to the contrast induced diuresis of the urogram. (orig.)
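To put the quoted sensitivity and specificity in context, predictive values depend on prevalence; with an assumed, purely hypothetical 20% prevalence of hydronephrosis in the referred population, the values would work out as follows.

```latex
% Predictive values from Se = 0.89, Sp = 0.95 and an assumed prevalence p = 0.20.
\mathrm{PPV}=\frac{\mathrm{Se}\cdot p}{\mathrm{Se}\cdot p+(1-\mathrm{Sp})(1-p)}
            =\frac{0.89\times0.20}{0.89\times0.20+0.05\times0.80}\approx0.82,
\qquad
\mathrm{NPV}=\frac{\mathrm{Sp}(1-p)}{\mathrm{Sp}(1-p)+(1-\mathrm{Se})\,p}
            =\frac{0.95\times0.80}{0.95\times0.80+0.11\times0.20}\approx0.97 .
```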
Baronti, Marco; van der Putten, Robertus; Venturi, Irene
This book, intended as a practical working guide for students in Engineering, Mathematics, Physics, or any other field where rigorous calculus is needed, includes 450 exercises. Each chapter starts with a summary of the main definitions and results, which is followed by a selection of solved exercises accompanied by brief, illustrative comments. A selection of problems with indicated solutions rounds out each chapter. A final chapter explores problems that are not designed with a single issue in mind but instead call for the combination of a variety of techniques, rounding out the book’s coverage. Though the book’s primary focus is on functions of one real variable, basic ordinary differential equations (separation of variables, linear first order and constant coefficients ODEs) are also discussed. The material is taken from actual written tests that have been delivered at the Engineering School of the University of Genoa. Literally thousands of students have worked on these problems, ensuring their real-...
Kumar, C. Senthil; John Arul, A.; Pal Singh, Om; Suryaprakasa Rao, K.
This paper presents the results of a reliability analysis of the Shutdown System (SDS) of the Indian Prototype Fast Breeder Reactor. Reliability analysis carried out using Fault Tree Analysis predicts a value of 3.5 × 10⁻⁸/de for failure of the shutdown function in the case of global faults and 4.4 × 10⁻⁸/de for local faults. Based on 20 de/y, the frequency of shutdown function failure is 0.7 × 10⁻⁶/ry, which meets the reliability target set by the Indian Atomic Energy Regulatory Board. The reliability is limited by Common Cause Failure (CCF) of the actuation part of the SDS and, to a lesser extent, CCF of electronic components. The failure frequency of the individual systems is of the order of 10⁻³/ry, which also meets the safety criteria. Uncertainty analysis indicates a maximum error factor of 5 for the top event unavailability
Measures of differences in reliability of two systems are considered in the scale model, location-scale model, and a nonparametric model. In each model, estimates and confidence intervals are given and some of their properties discussed
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...
CERN. Geneva. Audiovisual Unit; Gillies, James D
The lectures on reliability issues at the LHC will focus on five main modules over five days. Module 1: Basic Elements in Reliability Engineering. Some basic terms, definitions and methods, from components up to the system and the plant, common cause failures and human factor issues. Module 2: Interrelations of Reliability & Safety (R&S). Reliability and risk informed approach, living models, risk monitoring. Module 3: The Ideal R&S Process for Large Scale Systems. From R&S goals via the implementation into the system to the proof of compliance. Module 4: Some Applications of R&S on LHC. Master logic, anatomy of risk, cause-consequence diagram, decomposition and aggregation of the system. Module 5: Lessons Learned from R&S Application in Various Technologies. Success stories, pitfalls, constraints in data and methods, limitations per se, experienced in aviation, space, process, nuclear, offshore and transport systems and plants. The lectures will reflect in summary the compromise in...