WorldWideScience

Sample records for reliability pattern proposal

  1. [Physical activity patterns of school adolescents: Validity, reliability and percentiles proposal for their evaluation].

    Science.gov (United States)

    Cossío Bolaños, Marco; Méndez Cornejo, Jorge; Luarte Rocha, Cristian; Vargas Vitoria, Rodrigo; Canqui Flores, Bernabé; Gomez Campos, Rossana

    2017-02-01

Regular physical activity (PA) during childhood and adolescence is important for the prevention of non-communicable diseases and their risk factors. The objectives were to validate a questionnaire for measuring patterns of PA, verify its reliability, compare levels of PA aligned by chronological and biological age, and develop percentile curves to assess PA levels as a function of biological maturation. A descriptive cross-sectional study was performed on a non-probabilistic quota sample of 3,176 Chilean adolescents (1,685 males and 1,491 females) with an age range of 10.0 to 18.9 years. Weight, standing height, and sitting height were measured. Biological age (years from peak height velocity) and chronological age in years were determined. Body Mass Index was calculated and a PA survey was applied. The LMS method was used to develop percentiles. The values for the confirmatory analysis showed saturations between 0.517 and 0.653. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.879, with 70.8% of the variance explained. Cronbach's alpha values ranged from 0.81 to 0.86. There were differences between the genders when aligned by chronological age, but no differences when aligned by biological age. Percentiles are proposed to classify the PA of adolescents according to biological age and sex. The questionnaire used was valid and reliable, and PA should be evaluated by biological age. These findings led to the development of percentiles to assess PA according to biological age and gender.
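The internal-consistency statistic this record reports (Cronbach's alpha between 0.81 and 0.86) is simple to compute from an item-score matrix; a minimal sketch with made-up data, not the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Four respondents answering three highly consistent items (illustrative data).
scores = np.array([[3, 3, 4],
                   [1, 1, 1],
                   [4, 4, 5],
                   [2, 2, 2]])
print(round(cronbach_alpha(scores), 3))  # close to 1 for consistent items
```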

  2. Two-dimensional wavelet transform for reliability-guided phase unwrapping in optical fringe pattern analysis.

    Science.gov (United States)

    Li, Sikun; Wang, Xiangzhao; Su, Xianyu; Tang, Feng

    2012-04-20

This paper theoretically discusses the modulus of two-dimensional (2D) wavelet transform (WT) coefficients, calculated using two frequently used 2D daughter wavelet definitions, in optical fringe pattern analysis. The discussion shows that neither is good enough to represent the reliability of the phase data. The differences between the two definitions in the performance of the 2D WT are also discussed. We propose a new 2D daughter wavelet definition for reliability-guided phase unwrapping of optical fringe patterns. The modulus of the 2D WT coefficients obtained with a daughter wavelet under this new definition includes not only modulation information but also local frequency information of the deformed fringe pattern. Therefore, it can be treated as a good parameter that represents the reliability of the retrieved phase data. Computer simulation and experiment show the validity of the proposed method.
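Reliability-guided phase unwrapping, the procedure this wavelet-modulus quality map is proposed for, can be sketched generically: pixels are unwrapped in descending order of a reliability map, each relative to an already-unwrapped neighbor. The sketch below accepts any quality map (the paper's WT modulus would be one choice) and is not the authors' implementation:

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped: np.ndarray, quality: np.ndarray) -> np.ndarray:
    """Unwrap a 2-D phase map, visiting pixels in decreasing order of a
    reliability map (e.g. a wavelet-coefficient modulus)."""
    h, w = wrapped.shape
    unwrapped = np.array(wrapped, dtype=float)
    visited = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)
    visited[seed] = True
    heap = []

    def push_neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbors(*seed)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if visited[i, j]:
            continue
        visited[i, j] = True
        # choose the 2*pi multiple that brings this pixel closest to its parent
        diff = wrapped[i, j] - unwrapped[pi, pj]
        unwrapped[i, j] = wrapped[i, j] - 2 * np.pi * np.round(diff / (2 * np.pi))
        push_neighbors(i, j)
    return unwrapped

# A smooth ramp exceeding 2*pi; the seed's true phase is 0, so the
# unwrapped result matches the original ramp with no constant offset.
true_phase = np.linspace(0, 12, 50).reshape(5, 10)
wrapped = np.angle(np.exp(1j * true_phase))
result = quality_guided_unwrap(wrapped, np.ones_like(wrapped))
print(np.allclose(result, true_phase, atol=1e-6))
```

With a flat quality map this reduces to a plain flood fill; a genuinely informative map changes the visiting order so noisy regions are unwrapped last.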

  3. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

The research investigations involved in the study include cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided, as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CER's and, where possible, CTR's, in devising a suitable cost-effective policy.

  4. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  5. 78 FR 41339 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Science.gov (United States)

    2013-07-10

    ...] Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards AGENCY: Federal... Reliability Standards identified by the North American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. FOR FURTHER INFORMATION CONTACT: Kevin Ryan (Legal Information...

  6. INTER-RATER RELIABILITY FOR MOVEMENT PATTERN ANALYSIS (MPA): MEASURING PATTERNING OF BEHAVIORS VERSUS DISCRETE BEHAVIOR COUNTS AS INDICATORS OF DECISION-MAKING STYLE

    Directory of Open Access Journals (Sweden)

    Brenda L Connors

    2014-06-01

The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from Movement Pattern Analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators) of each factor was significantly higher and excellent (ICC = .89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of an empirical approach when weighing discrete behavioral counts against patterning of behaviors in determining the inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns.
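The intraclass correlation coefficient used for continuous measures such as these (the record reports ICC = .89 for patterning) can be computed from a two-way ANOVA decomposition; a minimal sketch of ICC(2,1), not the authors' implementation:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` has shape (n_subjects, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)           # per-subject means
    col_means = ratings.mean(axis=0)           # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-subjects mean square
    msc = ss_cols / (k - 1)                    # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters whose scores differ by a constant offset of 1 (made-up data):
# consistency is perfect, but absolute agreement is penalized, so ICC < 1.
print(round(icc_2_1(np.array([[1, 2], [2, 3], [3, 4], [4, 5]])), 3))
```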

  7. Pattern Recognition for Reliability Assessment of Water Distribution Networks

    NARCIS (Netherlands)

    Trifunović, N.

    2012-01-01

    The study presented in this manuscript investigates the patterns that describe the reliability of water distribution networks, focusing on node connectivity, energy balance, and the economics of construction, operation, and maintenance. A number of measures to evaluate the network resilience has been...

  8. Effects of Analytical and Holistic Scoring Patterns on Scorer Reliability in Biology Essay Tests

    Science.gov (United States)

    Ebuoh, Casmir N.

    2018-01-01

    Literature revealed that the patterns/methods of scoring essay tests had been criticized for not being reliable and this unreliability is more likely to be more in internal examinations than in the external examinations. The purpose of this study is to find out the effects of analytical and holistic scoring patterns on scorer reliability in…

  9. Pattern of alveolar bone loss and reliability of measurements with the radiographic technique

    International Nuclear Information System (INIS)

    Rise, J.; Albandar, J.M.

    1988-01-01

The purposes of this paper were to study the pattern of bone loss among different teeth at the individual level and to study the effect of using different aggregated units of analysis on measurement error. Bone loss was assessed in standardized periapical radiographs from 293 subjects (18-68 years), and the mean bone loss score for each tooth type was calculated. These were then correlated by means of factor analysis to study the bone loss pattern. Reliability (measurement error) was studied by the internal consistency and test-retest methods. Bone loss showed a unidimensional pattern, indicating that any tooth will work equally well as a dependent variable for epidemiologic descriptive purposes. However, a more thorough analysis also showed a multidimensional pattern in terms of four dimensions, which correspond to four tooth groups: incisors, upper premolars, lower premolars, and molars. The four dimensions accounted for 80% of the total variance. The multidimensional pattern may be important for the modeling of bone loss; thus, different models may explain the four dimensions (indices) used as dependent variables. The reliability (internal consistency) of the four indices was satisfactory. By the test-retest method, reliability was higher when the more aggregated unit (the individual) was used.

  10. Test-retest reliability of the proposed DSM-5 eating disorder diagnostic criteria

    Science.gov (United States)

    Sysko, Robyn; Roberto, Christina A.; Barnes, Rachel D.; Grilo, Carlos M.; Attia, Evelyn; Walsh, B. Timothy

    2012-01-01

    The proposed DSM-5 classification scheme for eating disorders includes both major and minor changes to the existing DSM-IV diagnostic criteria. It is not known what effect these modifications will have on the ability to make reliable diagnoses. Two studies were conducted to evaluate the short-term test-retest reliability of the proposed DSM-5 eating disorder diagnoses: anorexia nervosa, bulimia nervosa, binge eating disorder, and feeding and eating conditions not elsewhere classified. Participants completed two independent telephone interviews with research assessors (n=70 Study 1; n=55 Study 2). Fair to substantial agreements (κ= 0.80 and 0.54) were observed across eating disorder diagnoses in Study 1 and Study 2, respectively. Acceptable rates of agreement were identified for the individual eating disorder diagnoses, including DSM-5 anorexia nervosa (κ’s of 0.81 to 0.97), bulimia nervosa (κ=0.84), binge eating disorder (κ’s of 0.75 and 0.61), and feeding and eating disorders not elsewhere classified (κ’s of 0.70 and 0.46). Further, improved short-term test-retest reliability was noted when using the DSM-5, in comparison to DSM-IV, criteria for binge eating disorder. Thus, these studies found that trained interviewers can reliably diagnose eating disorders using the proposed DSM-5 criteria; however, additional data from general practice settings and community samples are needed. PMID:22401974
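Test-retest agreement of categorical diagnoses, as in the κ values above, is conventionally quantified with Cohen's kappa; a minimal sketch with hypothetical diagnosis labels (AN, BN, BED), not the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2     # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical diagnoses from two interviews (AN = anorexia nervosa,
# BN = bulimia nervosa, BED = binge eating disorder).
time1 = ["AN", "BN", "BED", "BN", "AN", "BED"]
time2 = ["AN", "BN", "BED", "BN", "BED", "BED"]
print(round(cohens_kappa(time1, time2), 3))  # 0.75
```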

  11. Investigating univariate temporal patterns for intrinsic connectivity networks based on complexity and low-frequency oscillation: a test-retest reliability study.

    Science.gov (United States)

    Wang, X; Jiao, Y; Tang, T; Wang, H; Lu, Z

    2013-12-19

Intrinsic connectivity networks (ICNs) are composed of spatial components and time courses. The spatial components of ICNs have been found to show moderate-to-high reliability. As far as we know, few studies have focused on the reliability of the temporal patterns of ICNs based on their individual time courses. The goals of this study were twofold: to investigate the test-retest reliability of temporal patterns for ICNs, and to analyze these informative univariate metrics. Additionally, a correlation analysis was performed to enhance interpretability. Our study included three datasets: (a) short- and long-term scans, (b) multi-band echo-planar imaging (mEPI), and (c) eyes open or closed. Using dual regression, we obtained the time courses of ICNs for each subject. To produce temporal patterns for ICNs, we applied two categories of univariate metrics: network-wise complexity and network-wise low-frequency oscillation. Furthermore, we validated the test-retest reliability for each metric. The network-wise temporal patterns for most ICNs (especially the default mode network, DMN) exhibited moderate-to-high reliability and reproducibility under different scan conditions. Network-wise complexity for the DMN exhibited fair reliability (ICC < 0.5) based on eyes-closed sessions. Notably, our results supported that mEPI could be a useful method with high reliability and reproducibility. In addition, these temporal patterns had physiological meaning, and certain temporal patterns were correlated with the node strength of the corresponding ICN. Overall, network-wise temporal patterns of ICNs were reliable and informative and could be complementary to spatial patterns of ICNs for further study. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Inter- and intra-examiner reliability of footprint pattern analysis obtained from diabetics using the Harris mat.

    Science.gov (United States)

    Cisneros, Lígia de Loiola; Fonseca, Tiago H S; Abreu, Vivianni C

    2010-01-01

    High plantar pressure is a proven risk factor for ulceration among individuals with diabetes mellitus. The Harris and Beath footprinting mat is one of the tools used in screening for foot ulceration risk among these subjects. There are no reports in the literature on the reliability of footprint analysis using print pattern criteria. The aim of this study was to evaluate the inter- and intra-examiner reliability of the analysis of footprint patterns obtained using the Harris and Beath footprinting mat. Footprints were taken from 41 subjects using the footprinting mat. The images were subjected to analysis by three independent examiners. To investigate the intra-examiner reliability, the analysis was repeated by one of the examiners one week later. The weighted kappa coefficient was excellent (K(w) > 0.80) for the inter- and intra-examiner analyses for most of the points studied on both feet. The criteria for analyzing footprint patterns obtained using the Harris and Beath footprinting mat presented good reliability and high to excellent inter- and intra-examiner agreement. This method is reliable for analyses involving one or more examiners. Article registered in the Australian New Zealand Clinical Trials Registry (ANZCTR) under the number ACTRN12609000693224.

  13. Revised scoring and improved reliability for the Communication Patterns Questionnaire.

    Science.gov (United States)

    Crenshaw, Alexander O; Christensen, Andrew; Baucom, Donald H; Epstein, Norman B; Baucom, Brian R W

    2017-07-01

    The Communication Patterns Questionnaire (CPQ; Christensen, 1987) is a widely used self-report measure of couple communication behavior and is well validated for assessing the demand/withdraw interaction pattern, which is a robust predictor of poor relationship and individual outcomes (Schrodt, Witt, & Shimkowski, 2014). However, no studies have examined the CPQ's factor structure using analytic techniques sufficient by modern standards, nor have any studies replicated the factor structure using additional samples. Further, the current scoring system uses fewer than half of the total items for its 4 subscales, despite the existence of unused items that have content conceptually consistent with those subscales. These characteristics of the CPQ have likely contributed to findings that subscale scores are often troubled by suboptimal psychometric properties such as low internal reliability (e.g., Christensen, Eldridge, Catta-Preta, Lim, & Santagata, 2006). The present study uses exploratory and confirmatory factor analyses on 4 samples to reexamine the factor structure of the CPQ to improve scale score reliability and to determine if including more items in the subscales is warranted. Results indicate that a 3-factor solution (constructive communication and 2 demand/withdraw scales) provides the best fit for the data. That factor structure was confirmed in the replication samples. Compared with the original scales, the revised scales include additional items that expand the conceptual range of the constructs, substantially improve reliability of scale scores, and demonstrate stronger associations with relationship satisfaction and sensitivity to change in therapy. Implications for research and treatment are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Reliability and validity of the Korean standard pattern identification for stroke (K-SPI-Stroke questionnaire

    Directory of Open Access Journals (Sweden)

    Kang Byoung-Kab

    2012-04-01

Background: The present study was conducted to examine the reliability and validity of the 'Korean Standard Pattern Identification for Stroke' (K-SPI-Stroke), which was developed and evaluated within the context of traditional Korean medicine (TKM). Methods: Between September 2006 and December 2010, 2,905 patients from 11 Korean medical hospitals were asked to complete the K-SPI-Stroke questionnaire as part of the project 'Fundamental study for the standardization and objectification of pattern identification in traditional Korean medicine for stroke' (SOPI-Stroke). Each patient was independently diagnosed by two TKM physicians from the same site according to one of four patterns, as suggested by the Korea Institute of Oriental Medicine: (1) a Qi deficiency pattern, (2) a Dampness-phlegm pattern, (3) a Yin deficiency pattern, or (4) a Fire-heat pattern. We estimated the internal consistency using Cronbach's α coefficient, the discriminant validity using the mean scores of the patterns, and the predictive validity using the classification accuracy of the K-SPI-Stroke questionnaire. Results: The K-SPI-Stroke questionnaire had satisfactory internal consistency (α = 0.700) and validity, with significant differences in the mean scores among the four patterns. The overall classification accuracy of this questionnaire was 65.2%. Conclusion: These results suggest that the K-SPI-Stroke questionnaire is a reliable and valid instrument for estimating the severity of the four patterns.

  15. PROPOSAL OF A TABLE TO CLASSIFY THE RELIABILITY OF BASELINES OBTAINED BY GNSS TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Lima Segantine

The correct processing of GNSS measurements, as well as a correct interpretation of the results, are fundamental factors for analyzing the quality of land surveying work. In that sense, it is important to keep in mind that, although the statistical data provided by most commercial GNSS processing software describe the credibility of the work, they do not carry consistent information about the reliability of the processed coordinates. Based on that assumption, this paper proposes a table to classify the reliability of baselines obtained through GNSS data processing. As input data, GNSS measurements were performed during the years 2006 and 2008, considering different seasons of the year, geometric configurations of RBMC stations, and baseline lengths. As demonstrated in this paper, parameters such as baseline length, ambiguity solution, PDOP value, and the precision of the horizontal and vertical coordinate values can be used as reliability parameters. The proposed classification meets the requirements of Brazilian Law No. 10.267/2001 of the National Institute of Colonization and Agrarian Reform (INCRA).
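A classification of this kind can be expressed as a simple grading function over the parameters the record lists (ambiguity solution, baseline length, PDOP, coordinate precision). The thresholds below are hypothetical placeholders for illustration only; the paper's actual table defines the real cut-offs:

```python
def classify_baseline(length_km, ambiguity_fixed, pdop, h_prec_m, v_prec_m):
    """Grade the reliability of a processed GNSS baseline from A (high) to D.
    All thresholds here are illustrative placeholders, not the values of the
    table proposed in the paper."""
    if not ambiguity_fixed:                # float solution: reject outright
        return "D (unreliable: float ambiguity solution)"
    score = 0
    score += 2 if length_km <= 10 else (1 if length_km <= 20 else 0)
    score += 2 if pdop <= 3 else (1 if pdop <= 6 else 0)
    if h_prec_m <= 0.01 and v_prec_m <= 0.02:
        score += 2
    elif h_prec_m <= 0.05:
        score += 1
    return {6: "A (high)", 5: "A (high)",
            4: "B (medium)", 3: "B (medium)"}.get(score, "C (low)")

# Short fixed baseline with good geometry and millimetric precision.
print(classify_baseline(8.4, True, 2.3, 0.009, 0.018))  # A (high)
```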

  16. 78 FR 38851 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Science.gov (United States)

    2013-06-28

    ... either: Provide little protection for Bulk-Power System reliability or are redundant with other aspects... for retirement either: (1) Provide little protection for Bulk-Power System reliability or (2) are... to assure reliability of the Bulk-Power System and should be withdrawn. We have identified 41...

  17. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics-of-failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) ... identification. Application of the proposed method can be found in many real-world systems.
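The effect of statistical dependence between components on system reliability can be illustrated with a small Monte Carlo sketch. This uses generic equicorrelated standard-normal component states and a series system; it is an assumption-laden toy, not the paper's physics-of-failure model:

```python
import numpy as np

def series_failure_prob(n_comp, corr, beta, n_samples=200_000, seed=1):
    """Monte Carlo failure probability of a series system. Component i fails
    when its standard-normal state Z_i exceeds a reliability index beta; the
    Z_i are equicorrelated with coefficient `corr`."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_comp, n_comp), corr) + (1.0 - corr) * np.eye(n_comp)
    z = rng.multivariate_normal(np.zeros(n_comp), cov, size=n_samples)
    return (z > beta).any(axis=1).mean()  # any component failing fails the system

# Positive dependence makes component failures coincide, so the series-system
# failure probability drops below the independent-components value.
p_corr = series_failure_prob(4, 0.8, beta=2.0)
p_ind = series_failure_prob(4, 0.0, beta=2.0)
print(p_corr < p_ind)  # True
```

This is why the independence assumption of classical methods matters: for a series system it is conservative under positive correlation, and the error grows with the correlation strength.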

  18. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Because failure-occurrence states are significant elements for accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computations and of the neural network for initial training patterns, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
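The Markovian side of such an approach rests on solving for the stationary distribution of a state-transition model. A minimal two-state (up/down) availability sketch, not the paper's multi-AGV model:

```python
import numpy as np

def steady_state_availability(failure_rate, repair_rate):
    """Steady-state probability of the 'up' state of a two-state Markov
    process, obtained by solving pi @ Q = 0 with sum(pi) = 1."""
    q = np.array([[-failure_rate, failure_rate],
                  [repair_rate, -repair_rate]])  # generator matrix
    a = np.vstack([q.T, np.ones(2)])             # append normalization row
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(a, b, rcond=None)
    return pi[0]

# Agrees with the closed form mu / (lambda + mu) = 0.5 / 0.51.
print(round(steady_state_availability(0.01, 0.5), 4))  # 0.9804
```

Larger systems follow the same pattern with one generator row per failure/repair state, which is where the state-space growth motivates the neural-network complement described above.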

  19. Reliability of patterns of hippocampal sclerosis as predictors of postsurgical outcome.

    Science.gov (United States)

    Thom, Maria; Liagkouras, Ioannis; Elliot, Kathryn J; Martinian, Lillian; Harkness, William; McEvoy, Andrew; Caboclo, Luis O; Sisodiya, Sanjay M

    2010-09-01

    Around one-third of patients undergoing temporal lobe surgery for the treatment of intractable temporal lobe epilepsy with hippocampal sclerosis (HS) fail to become seizure-free. Identifying reliable predictors of poor surgical outcome would be helpful in management. Atypical patterns of HS may be associated with poorer outcomes. Our aim was to identify atypical HS cases from a large surgical series and to correlate pathology with clinical and outcome data. Quantitative neuropathologic evaluation on 165 hippocampal surgical specimens and 21 control hippocampi was carried out on NeuN-stained sections. Neuronal densities (NDs) were measured in CA4, CA3, CA2, and CA1 subfields. The severity of granule cell dispersion (GCD) was assessed. Comparison with control ND values identified the following patterns based on the severity and distribution of neuronal loss: classical HS (CHS; n = 60) and total HS (THS; n = 39). Atypical patterns were present in 30% of cases, including end-folium sclerosis (EFS; n = 5), CA1 predominant pattern (CA1p; n = 9), and indeterminate HS (IHS, n = 35). No HS was noted in 17 cases. Poorest outcomes were noted for no-HS, and CA1p groups with 33-44% International League Against Epilepsy (ILAE) class I at up to 2 years follow-up compared to 69% for CHS (p < 0.05). GCD associated with HS type (p < 0.01), but not with outcome. These findings support the identification and delineation of atypical patterns of HS using quantitative methods. Atypical patterns may represent distinct clinicopathologic subtypes and may have predictive value following epilepsy surgery. Wiley Periodicals, Inc. © 2010 International League Against Epilepsy.

  20. A Model of Bus Bunching under Reliability-based Passenger Arrival Patterns

    OpenAIRE

    Fonzone, Achille; Schmöcker, Jan-Dirk; Liu, Ronghui

    2015-01-01

    If bus service departure times are not completely unknown to the passengers, non-uniform passenger arrival patterns can be expected. We propose that passengers decide their arrival time at stops based on a continuous logit model that considers the risk of missing services. Expected passenger waiting times are derived in a bus system that allows also for overtaking between bus services. We then propose an algorithm to derive the dwell time of subsequent buses serving a stop in order to illustr...

  1. Interrater reliability of the Saint-Anne Dargassies Scale in assessing the neurological patterns of healthy preterm newborns

    Directory of Open Access Journals (Sweden)

    Carla Ismirna Santos Alves

Objectives: to assess the interrater reliability of the Saint-Anne Dargassies Scale in assessing the neurological patterns of healthy preterm newborns. Methods: twenty preterm newborns with normal serial cranial ultrasound examinations met the inclusion criteria for participation in this prospective study. The neurologic examination was performed using the Saint-Anne Dargassies Scale. To test reliability, the study was structured as follows: group I (rater 1, physiotherapist; rater 2, neonatologist); group II (rater 3, physiotherapist; rater 4, child neurologist); and the gold standard (an expert and professor in pediatric neurology). Results: high interrater agreement was observed between groups I and II compared with the gold standard in assessing postural pattern (p < 0.01). Regarding the assessment of primitive reflexes, greater agreement was observed in the evaluation of the palmar grasp reflex and Moro reflex (p < 0.01) for group I compared with the gold standard. An analysis of tone demonstrated heterogeneous agreement, without compromising the reliability of the scale. Measurements of head circumference in the two groups were statistically equivalent to those of the gold standard. Conclusions: the Saint-Anne Dargassies Scale demonstrated high reliability and homogeneity, with significant power of reproducibility, and may be capable of identifying preterm newborns suspected of having neurological deficits.

  2. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  3. Reliability of digital ulcer definitions as proposed by the UK Scleroderma Study Group: A challenge for clinical trial design.

    Science.gov (United States)

    Hughes, Michael; Tracey, Andrew; Bhushan, Monica; Chakravarty, Kuntal; Denton, Christopher P; Dubey, Shirish; Guiducci, Serena; Muir, Lindsay; Ong, Voon; Parker, Louise; Pauling, John D; Prabu, Athiveeraramapandian; Rogers, Christine; Roberts, Christopher; Herrick, Ariane L

    2018-06-01

    The reliability of clinician grading of systemic sclerosis-related digital ulcers has been reported to be poor to moderate at best, which has important implications for clinical trial design. The aim of this study was to examine the reliability of new proposed UK Scleroderma Study Group digital ulcer definitions among UK clinicians with an interest in systemic sclerosis. Raters graded (through a custom-built interface) 90 images (80 unique and 10 repeat) of a range of digital lesions collected from patients with systemic sclerosis. Lesions were graded on an ordinal scale of severity: 'no ulcer', 'healed ulcer' or 'digital ulcer'. A total of 23 clinicians - 18 rheumatologists, 3 dermatologists, 1 hand surgeon and 1 specialist rheumatology nurse - completed the study. A total of 2070 (1840 unique + 230 repeat) image gradings were obtained. For intra-rater reliability, across all images, the overall weighted kappa coefficient was high (0.71) and was moderate (0.55) when averaged across individual raters. Overall inter-rater reliability was poor (0.15). Although our proposed digital ulcer definitions had high intra-rater reliability, the overall inter-rater reliability was poor. Our study highlights the challenges of digital ulcer assessment by clinicians with an interest in systemic sclerosis and provides a number of useful insights for future clinical trial design. Further research is warranted to improve the reliability of digital ulcer definition/rating as an outcome measure in clinical trials, including examining the role for objective measurement techniques, and the development of digital ulcer patient-reported outcome measures.

  4. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
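The classical evidence discounting rule referred to above weights a mass function by a source reliability factor α and moves the remainder to total ignorance. A minimal sketch under Shafer's discounting, with a hypothetical two-class frame:

```python
def discount(mass, frame, alpha):
    """Shafer's classical discounting: scale every focal element by the
    source reliability alpha and move the remaining 1 - alpha to the whole
    frame (total ignorance)."""
    out = {focal: alpha * m for focal, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

# A classifier asserting class "a" with mass 0.9, judged only 60% reliable
# (hypothetical two-class frame {a, b}).
frame = frozenset({"a", "b"})
m = {frozenset({"a"}): 0.9, frame: 0.1}
md = discount(m, frame, alpha=0.6)
print(round(md[frozenset({"a"})], 2), round(md[frame], 2))  # 0.54 0.46
```

In a fusion pipeline such as CF-CRE, each classifier's output would be discounted this way (with α derived from its estimated reliability) before combination with Dempster's rule.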

  5. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed

    DEFF Research Database (Denmark)

    Kottner, Jan; Audigé, Laurent; Brorson, Stig

    2011-01-01

    Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown; therefore, there is a need for such studies. However, standards or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies.

  6. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Larsen, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Eto, Joseph [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-01-01

    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  7. [A Screening-Tool for Three Dimensions of Work-Related Behavior and Experience Patterns in the Psychosomatic Rehabilitation - A Proposal for a Short-Form of the Occupational Stress and Coping Inventory (AVEM-3D)].

    Science.gov (United States)

    Beierlein, V; Köllner, V; Neu, R; Schulz, H

    2016-12-01

    Objectives: The assessment of work pressures is of particular importance in psychosomatic rehabilitation. An established questionnaire is the Occupational Stress and Coping Inventory (German abbr. AVEM), but it is quite long and, with regard to scoring, time-consuming in routine clinical care. It should therefore be tested whether a shortened version of the AVEM can be developed that assesses the three previously described second-order factors of the AVEM, namely Working Commitment, Resilience, and Emotions, with sufficient reliability and validity, and that may also be used for screening patients with prominent work-related behavior and experience patterns. Methods: Data were collected at admission from consecutive samples in three psychosomatic rehabilitation hospitals (N = 10,635 patients). The sample was randomly divided into two subsamples (design and validation sample). Using exploratory principal component analyses in the design sample, items with the highest factor loadings for the three new scales were selected and evaluated psychometrically using the validation sample. Possible cut-off values were to be derived from the distribution patterns of the scale scores. Relationships with sociodemographic, occupational and diagnosis-related characteristics, as well as with patterns of work-related experiences and behaviors, were examined. Results: The three principal component analyses performed in the design sample explained between 31% and 34% of the variance on the respective first factor. The selected 20 items were assigned to the 3-factor structure in the validation sample as expected. The three new scales are sufficiently reliable, with values of Cronbach's α between 0.84 and 0.88. The naming of the three new scales is based on the names of the secondary factors. Cut-off values for the identification of distinctive patient-reported data are proposed. Conclusion: Main advantages of the proposed shortened version AVEM-3D are that with a …

  8. Pattern description and reliability parameters of six force-time related indices measured with plantar pressure measurements.

    Science.gov (United States)

    Deschamps, Kevin; Roosen, Philip; Bruyninckx, Herman; Desloovere, Kaat; Deleu, Paul-Andre; Matricali, Giovanni A; Peeraer, Louis; Staes, Filip

    2013-09-01

    Functional interpretation of plantar pressure measurements is commonly done through the use of ratios and indices, which are preceded by the strategic combination of a subsampling method and a selection of physical quantities. However, errors which may arise throughout the determination of these temporal indices/ratio calculations (T-IRC) have not been quantified. The purpose of the current study was therefore to estimate the reliability of T-IRC following semi-automatic total mapping (SATM). Using a repeated-measures design, two experienced therapists performed three subsampling sessions on three left and right pedobarographic footprints of ten healthy participants. Following the subsampling, six T-IRC were calculated: Rearfoot-Forefoot_fti, Rearfoot-Midfoot_fti, Forefoot medial/lateral_fti, First ray_fti, Metatarsal 1-Metatarsal 5_fti, Foot medial-lateral_fti. Patterns of the T-IRC were found to be consistent and in good agreement with corresponding knowledge from the literature. The inter-session errors of both therapists were similar in pattern and magnitude. The lowest peak inter-therapist error was found in the First ray_fti (6.5 a.u.), whereas the highest peak inter-therapist error was observed in the Forefoot medial/lateral_fti (27.0 a.u.). The magnitude of the inter-session and inter-therapist error varied over time, precluding the calculation of a simple numerical value for the error. The difference between both error parameters of all T-IRC was negligible, which underscores the repeatability of the SATM protocol. The current study reports consistent patterns for six T-IRC and similar inter-session and inter-therapist errors. The proposed SATM protocol and the T-IRC may therefore serve as a basis for functional interpretation of footprint data. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  10. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program studies existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) a general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  12. Proposal for the development of 3D Vertically Integrated Pattern Recognition Associative Memory (VIPRAM)

    Energy Technology Data Exchange (ETDEWEB)

    Deptuch, Gregory; Hoff, Jim; Kwan, Simon; Lipton, Ron; Liu, Ted; Ramberg, Erik; Todri, Aida; Yarema, Ray; /Fermilab; Demarteau, Marcel; Drake, Gary; Weerts, Harry; /Argonne /Chicago U. /Padua U. /INFN, Padua

    2010-10-01

    Future particle physics experiments looking for rare processes will have no choice but to address the demanding challenges of fast pattern recognition in triggering as detector hit density becomes significantly higher due to the high luminosity required to produce the rare process. The authors propose to develop a 3D Vertically Integrated Pattern Recognition Associative Memory (VIPRAM) chip for HEP applications, to advance the state-of-the-art for pattern recognition and track reconstruction for fast triggering.

  13. Improvement of mechanical reliability by patterned silver/Indium-Tin-Oxide structure for flexible electronic devices

    International Nuclear Information System (INIS)

    Baek, Kyunghyun; Jang, Kyungsoo; Lee, Youn-Jung; Ryu, Kyungyul; Choi, Woojin; Kim, Doyoung; Yi, Junsin

    2013-01-01

    We report the effect of a silver (Ag)-buffer-layer Indium-Tin-Oxide (ITO) film on a polyethylene terephthalate substrate on the electrical, optical and reliability properties for transparent, flexible displays. The electrical and optical characteristics of an ITO-only film and an Ag-layer-inserted ITO film are measured and compared to assess the applicability of the triple-layered structure in flexible displays. The sheet resistance, resistivity and light transmittance of the ITO-only film were 400 Ω/sq, 1.33 × 10⁻³ Ω·cm and 99.2%, while those of the ITO film with an inserted 10 nm thick Ag layer were 165 Ω/sq, 4.78 × 10⁻⁴ Ω·cm and about 97%, respectively. To evaluate the mechanical reliability of the different ITO films, bending tests were carried out. After a dynamic bending test of 900 cycles, the sheet resistance of the ITO film with the Ag layer changed from 154 Ω/sq to 475 Ω/sq, about a 3-fold increase, whereas that of the ITO-only film changed from 400 Ω/sq to 61,986 Ω/sq, about a 150-fold increase. When the radius was changed from 25 mm to 20 mm in the static bending test, the sheet resistance of the ITO-only film increased linearly from 400 Ω/sq to 678.3 Ω/sq, whereas that of the Ag-layer-inserted ITO film changed only slightly, from 154.4 Ω/sq to 154.9 Ω/sq. These results show that the Ag-layer-inserted ITO film had better mechanical characteristics than the ITO-only film. - Highlights: ► Transparent flexible electrode fabricated on glass substrate. ► Electrode fabricated using vertically-patterned design on glass substrate. ► Optimization of the vertical patterns. ► Application of the vertically-patterned electrode in transparent, flexible electronics.

  14. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f…

  15. Assessment of printability for printed electronics patterns by measuring geometric dimensions and defining assessment parameters

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Sung Woong [Dept. of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu (Korea, Republic of); Kim, Cheol; Kim, Chung Hwan [Chungnam National University, Daejeon (Korea, Republic of)

    2016-12-15

    The printability of patterns for printed electronic devices determines the performance, yield rate, and reliability of the devices; therefore, it should be assessed quantitatively. In this paper, parameters for printability assessment of printed patterns for width, pinholes, and edge waviness are suggested. For quantitative printability assessment, printability grades for each parameter are proposed according to the parameter values. As examples of printability assessment, printed line patterns and mesh patterns obtained using roll-to-roll gravure printing are used. Both single-line patterns and mesh patterns show different levels of printability, even in samples obtained using the same printing equipment and conditions. Therefore, for reliable assessment, it is necessary to assess the printability of the patterns by enlarging the sampling area and increasing the number of samples. We can predict the performance of printed electronic devices by assessing the printability of the patterns that constitute them.

  16. Assessment of printability for printed electronics patterns by measuring geometric dimensions and defining assessment parameters

    International Nuclear Information System (INIS)

    Jeon, Sung Woong; Kim, Cheol; Kim, Chung Hwan

    2016-01-01

    The printability of patterns for printed electronic devices determines the performance, yield rate, and reliability of the devices; therefore, it should be assessed quantitatively. In this paper, parameters for printability assessment of printed patterns for width, pinholes, and edge waviness are suggested. For quantitative printability assessment, printability grades for each parameter are proposed according to the parameter values. As examples of printability assessment, printed line patterns and mesh patterns obtained using roll-to-roll gravure printing are used. Both single-line patterns and mesh patterns show different levels of printability, even in samples obtained using the same printing equipment and conditions. Therefore, for reliable assessment, it is necessary to assess the printability of the patterns by enlarging the sampling area and increasing the number of samples. We can predict the performance of printed electronic devices by assessing the printability of the patterns that constitute them.

  17. Implementation of PATREC nuclear reliability program in LISP

    International Nuclear Information System (INIS)

    Patterson-Hine, F.A.; Koen, B.V.

    1985-01-01

    The reliability of large systems can be represented by reliability fault trees that contain the failure probabilities of the individual elements in the original network and the logical connectives that describe the interdependence of those probabilities. The PATREC 1 computer code was written to demonstrate the feasibility of using list-processing techniques for the resolution of a reliability fault tree by pattern recognition. PATREC 1 was written in PL/1 and is used widely in France. The fault tree is expressed as a linked data structure, oriented, mapped into an end-ordered traverse, and used to retrieve known patterns stored as a linked-tree library. The basic idea of pattern recognition is to prune the fault tree by identifying known patterns, retrieving the corresponding mathematical equation, and evaluating the replacement leaves. This process is repeated until the original tree is reduced to a single leaf: the system reliability.
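
    The pruning idea (replace a recognized subtree with a single leaf carrying its probability, and repeat until one leaf remains) is, for independent basic events, equivalent to a recursive evaluation of the AND/OR tree. A small Python analogue of that reduction, not the PL/1 PATREC code:

```python
def failure_prob(node):
    """Collapse a fault tree into a single failure probability, leaf by
    leaf, mirroring PATREC's replace-recognized-pattern-with-a-leaf idea.
    Leaves are probabilities; gates are ('AND', [...]) or ('OR', [...])
    over statistically independent inputs."""
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [failure_prob(child) for child in children]
    if gate == "AND":                     # fails only if all inputs fail
        result = 1.0
        for p in probs:
            result *= p
        return result
    if gate == "OR":                      # fails if any input fails
        survive = 1.0
        for p in probs:
            survive *= (1.0 - p)
        return 1.0 - survive
    raise ValueError("unknown gate: %s" % gate)

tree = ("OR", [("AND", [0.1, 0.2]), 0.05])
print(round(failure_prob(tree), 3))       # → 0.069
```

    Each recursive call plays the role of one pattern replacement: a recognized gate and its children are exchanged for a single numeric leaf.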

  18. The reliability and validity of subjective notational analysis in comparison to global positioning system tracking to assess athlete movement patterns.

    Science.gov (United States)

    Doğramac, Sera N; Watsford, Mark L; Murphy, Aron J

    2011-03-01

    Subjective notational analysis can be used to track players and analyse movement patterns during match-play of team sports such as futsal. The purpose of this study was to establish the validity and reliability of the Event Recorder for subjective notational analysis. A course was designed, replicating ten minutes of futsal match-play movement patterns, where ten participants undertook the course. The course allowed a comparison of data derived from subjective notational analysis, to the known distances of the course, and to GPS data. The study analysed six locomotor activity categories, focusing on total distance covered, total duration of activities and total frequency of activities. The values between the known measurements and the Event Recorder were similar, whereas the majority of significant differences were found between the Event Recorder and GPS values. The reliability of subjective notational analysis was established with all ten participants being analysed on two occasions, as well as analysing five random futsal players twice during match-play. Subjective notational analysis is a valid and reliable method of tracking player movements, and may be a preferred and more effective method than GPS, particularly for indoor sports such as futsal, and field sports where short distances and changes in direction are observed.

  19. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  20. Design of etch holes to compensate spring width loss for reliable resonant frequencies

    International Nuclear Information System (INIS)

    Jang, Yun-Ho; Kim, Jong-Wan; Kim, Yong-Kweon; Kim, Jung-Mu

    2012-01-01

    A pattern width loss during the fabrication of lateral silicon resonators degrades resonant frequency reliability, since such a width loss causes significant deviation of the spring stiffness. Here we present a design guide for etch holes to obtain reliable resonant frequencies by controlling etch hole geometries. The new function of an etch hole is to generate a comparable amount of width loss between springs and etch holes, and in turn to minimize the effect of the spring width loss on resonant frequency shift and deviation. An analytic expression reveals that a compensation factor (CF), defined as the circumference (C_u) of a unit etch hole divided by its silicon area (A_u), is a key parameter for reliable frequencies. Protrusive etch holes were proposed and compared with square etch holes to demonstrate the frequency reliability according to CF values and etch hole shapes. The normalized resonant frequency shift and deviation of the protrusive etch hole (−13.0% ± 6.9%) were significantly improved compared with those of a square etch hole with a small CF value (−42.8% ± 14.8%). The proposed design guide, based on the CF value and protrusive shapes, can be used to achieve reliable resonant frequencies for high-performance silicon resonators. (technical note)
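
    The compensation factor can be computed directly once the etch hole geometry is fixed. A small sketch, assuming a square hole of a given side length in a square unit cell of a given pitch, with the "silicon area" A_u taken as the silicon remaining in the cell (the paper's exact geometric convention may differ):

```python
def compensation_factor(pitch, side):
    """CF = C_u / A_u for a square etch hole of the given side length in a
    square unit cell of the given pitch; A_u is taken here as the silicon
    remaining in the cell (an assumption about the paper's convention)."""
    return (4.0 * side) / (pitch * pitch - side * side)

print(round(compensation_factor(10.0, 4.0), 4))   # → 0.1905
```

    Under this convention, enlarging the hole or shrinking the cell raises CF, i.e. more hole circumference per unit silicon, which is the lever that protrusive hole shapes exploit: they add circumference without adding much area.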

  1. Pre-Proposal Assessment of Reliability for Spacecraft Docking with Limited Information

    Science.gov (United States)

    Brall, Aron

    2013-01-01

    This paper addresses the problem of estimating the reliability of a critical system function as well as its impact on the system reliability when limited information is available. The approach addresses the basic function reliability, and then the impact of multiple attempts to accomplish the function. The dependence of subsequent attempts on prior failure to accomplish the function is also addressed. The autonomous docking of two spacecraft was the specific example that generated the inquiry, and the resultant impact on total reliability generated substantial interest in presenting the results due to the relative insensitivity of overall performance to basic function reliability and moderate degradation given sufficient attempts to try and accomplish the required goal. The application of the methodology allows proper emphasis on the characteristics that can be estimated with some knowledge, and to insulate the integrity of the design from those characteristics that can't be properly estimated with any rational value of uncertainty. The nature of NASA's missions contains a great deal of uncertainty due to the pursuit of new science or operations. This approach can be applied to any function where multiple attempts at success, with or without degradation, are allowed.
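
    The multiple-attempt logic described above can be made concrete: with a per-attempt success probability and a degradation factor applied after each failed attempt, the overall success probability follows from summing over the attempt at which success first occurs. A minimal sketch with purely illustrative numbers, not the paper's docking estimates:

```python
def mission_success(p, n, degrade=1.0):
    """Probability that at least one of n attempts succeeds when each
    failed attempt multiplies the next attempt's success probability by
    `degrade` (degrade=1.0 means attempts are i.i.d.)."""
    success, fail_all = 0.0, 1.0
    for _ in range(n):
        success += fail_all * p     # first success occurs on this attempt
        fail_all *= (1.0 - p)
        p *= degrade                # degradation caused by the prior failure
    return success

print(round(mission_success(0.7, 3), 4))               # → 0.973
print(round(mission_success(0.7, 3, degrade=0.9), 4))  # → 0.9519
```

    This reproduces the abstract's observation: overall performance is relatively insensitive to the basic function reliability, and tolerates moderate degradation, given sufficient attempts.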

  2. 18 CFR 39.5 - Reliability Standards.

    Science.gov (United States)

    2010-04-01

    Title 18 (Conservation of Power and Water Resources), Part 39, § 39.5 Reliability Standards: (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective under...

  3. A novel double patterning approach for 30nm dense holes

    Science.gov (United States)

    Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven

    2011-04-01

    Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. A novel process of double-patterned pillars combined with image reverse is proposed for the realization of low-cost dense holes in 30nm-node DRAM. The nature of pillar-formation lithography provides much better optical contrast than the counterpart hole patterning with similar CD requirements. With a reliable freezing process, double-patterned pillars can be readily implemented. A novel image reverse process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of image reverse processes are investigated and discussed for their pros and cons. An economic approach with optimized lithography performance is proposed for application to the 30nm DRAM node.

  4. Short- and long-term reliability of adult recall of vegetarian dietary patterns in the Adventist Health Study-2 (AHS-2).

    Science.gov (United States)

    Teixeira Martins, Marcia C; Jaceldo-Siegl, Karen; Fan, Jing; Singh, Pramil; Fraser, Gary E

    2015-01-01

    Past dietary patterns may be more important than recent dietary patterns in the aetiology of chronic diseases because of the long latency in their development. We developed an instrument to recall vegetarian dietary patterns during the lifetime and examined its reliability of recall over 5.3 and 32.6 years on average. The short-term/5-year recall ability study (5-RAS) was done using 24,690 participants from the cohort of the Adventist Health Study-2 (mean age 62.2 years). The long-term/33-year recall ability study (33-RAS) included an overlap population of 1721 individuals who joined the Adventist Health Study-1 and Adventist Health Study-2 (mean age 72.5 years). Spearman correlation coefficients for recall of vegetarian status were 0.78 and 0.72 for the 5-RAS and 33-RAS, respectively, when compared with 'reference' data. For both time periods, sensitivity and positive predictive values were highest for the lacto-ovo-vegetarian and non-vegetarian patterns (of the five categories: vegans, lacto-ovo-vegetarians, pesco-vegetarians, semi-vegetarians and non-vegetarians). In the 5-RAS analyses, male, non-black, younger and more educated participants, lifetime Adventists, and those with more stability of consumption of animal products generally showed higher recall ability. Somewhat similar tendencies were shown in the 33-RAS analyses. Our findings show that the instrument has higher reliability for recalled lacto-ovo-vegetarian and non-vegetarian than for vegan, semi- and pesco-vegetarian dietary patterns in both short- and long-term recalls, in part because these last dietary patterns were greatly contaminated by recalls that correctly would have belonged in the adjoining category that consumed more animal products.

  5. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Aiming to resolve the problem of multiple kinds of uncertainty variables coexisting in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent solving method. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using the modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. Finally, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the proposed iterative algorithm.

  6. Study on seismic reliability for foundation grounds and surrounding slopes of nuclear power plants. Proposal of evaluation methodology and integration of seismic reliability evaluation system

    International Nuclear Information System (INIS)

    Ohtori, Yasuki; Kanatani, Mamoru

    2006-01-01

    This paper proposes a methodology for evaluating the annual probability of failure of soil structures subjected to earthquakes and integrates an analysis system for the seismic reliability of soil structures. The method is based on margin analysis, which evaluates the ground motion level at which a structure is damaged. First, a ground motion index that is strongly correlated with damage to, or the response of, the specific structure is selected. The ultimate strength in terms of the selected ground motion index is then evaluated. Next, the variation of soil properties is taken into account in evaluating the seismic stability of structures: the variation of the safety factor (SF) is evaluated and then converted into a variation of the specific ground motion index. Finally, the fragility curve is developed and the annual probability of failure is evaluated in combination with the seismic hazard curve. The system facilitates the assessment of seismic reliability: a random number generator, a dynamic analysis program and a stability analysis program are incorporated into one package. Once a structural model, the distribution of soil properties, input ground motions and so forth are defined, a list of safety factors for each sliding line is obtained. Monte Carlo simulation (MCS), Latin hypercube sampling (LHS), the point estimation method (PEM) and the first-order second-moment (FOSM) method implemented in this system are also introduced. As numerical examples, a ground foundation and a surrounding slope are assessed using the proposed method and the integrated system. (author)
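
    The final step (convolving a fragility curve with a seismic hazard curve to obtain the annual probability of failure) can be sketched numerically. The lognormal fragility and power-law hazard below are illustrative assumptions, not the paper's site-specific curves:

```python
import math

def annual_failure_prob(median, beta, hazard, a_lo=0.01, a_hi=5.0, n=2000):
    """Midpoint-rule evaluation of P_f = integral of F(a) * (-dH/da) da,
    with a lognormal fragility F (median capacity, log-std beta) and an
    annual exceedance hazard curve H(a)."""
    def fragility(a):
        return 0.5 * (1.0 + math.erf(math.log(a / median) / (beta * math.sqrt(2.0))))
    da = (a_hi - a_lo) / n
    total = 0.0
    for i in range(n):
        a = a_lo + (i + 0.5) * da
        dH = hazard(a + 0.5 * da) - hazard(a - 0.5 * da)   # negative: H decreases
        total += fragility(a) * (-dH)
    return total

hazard = lambda a: 1e-3 * a ** -2.0    # assumed annual exceedance frequency curve
pf = annual_failure_prob(median=1.0, beta=0.4, hazard=hazard)
print(f"{pf:.2e}")
```

    For this hazard/fragility pair the classical risk-equation closed form, H at the median capacity times exp(n²β²/2) for a power-law hazard of slope n, gives about 1.4 × 10⁻³ per year, which the numerical integral reproduces up to truncation error.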

  7. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
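
    The single-loop idea (drawing the epistemic quantity and the aleatory quantities in the same sampling loop instead of nesting one Monte Carlo loop inside another) can be sketched on a toy limit state g = capacity - load. The interval on the load mean and all distribution parameters are invented; this illustrates only the single-loop structure, not the authors' auxiliary-variable formulation:

```python
import random

def single_loop_pf(n=20000, seed=0):
    """Single-loop MCS: each iteration first samples the epistemic
    quantity (a load mean known only to lie in an interval), then the
    aleatory quantities, then checks the limit state g = capacity - load."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        mu_load = rng.uniform(9.0, 11.0)    # epistemic: interval-valued mean
        load = rng.gauss(mu_load, 1.5)      # aleatory load, given that mean
        capacity = rng.gauss(15.0, 1.0)     # aleatory capacity
        if capacity - load < 0.0:
            fails += 1
    return fails / n

pf = single_loop_pf()
print(round(pf, 4))
```

    A nested (double-loop) version would rerun the full aleatory simulation for every epistemic sample; folding both draws into one loop is what makes the single-loop scheme efficient.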

  8. An overall methodology for reliability prediction of mechatronic systems design with industrial application

    International Nuclear Information System (INIS)

    Habchi, Georges; Barthod, Christine

    2016-01-01

    We propose in this paper an overall ten-step methodology dedicated to the analysis and quantification of reliability during the design phase of a mechatronic system, considered as a complex system. The ten steps of the methodology are detailed according to the downward side of the V-development cycle usually used for the design of complex systems. Two complementary phases of analysis cover the ten steps: qualitative analysis and quantitative analysis. The qualitative phase analyzes the functional and dysfunctional behavior of the system and then determines its different failure modes and degradation states, based on external and internal functional analysis, organic and physical implementation, and dependencies between components, with consideration of customer specifications and the mission profile. The quantitative phase is used to calculate the reliability of the system and its components, based on the qualitative behavior patterns, and considering data gathering and processing and reliability targets. A systemic approach is used to calculate the reliability of the system, taking into account the different technologies of a mechatronic system (mechanics, electronics, electrical, etc.), dependencies and interactions between components, and external influencing factors. To validate the methodology, the ten steps are applied to an industrial system, the smart actuator of Pack'Aero Company. - Highlights: • A ten-step methodology for reliability prediction of mechatronic systems design. • Qualitative and quantitative analysis for reliability evaluation using PN and RBD. • A dependency matrix proposal, based on the collateral and functional interactions. • Models consider mission profile, deterioration, interactions and influencing factors. • Application and validation of the methodology on the “Smart Actuator” of PACK’AERO.
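
The RBD-based quantitative step reduces to series/parallel reliability algebra. A minimal sketch follows; the component reliabilities and the sensor/controller/actuator chain are made-up illustrations, not Pack'Aero data.

```python
def series(rels):
    # Series RBD: the system survives only if every block survives
    p = 1.0
    for r in rels:
        p *= r
    return p

def parallel(rels):
    # Parallel RBD: the system fails only if every redundant block fails
    q = 1.0
    for r in rels:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical mechatronic chain: sensor -> redundant controllers -> actuator
r_system = series([0.99, parallel([0.95, 0.95]), 0.98])
print(round(r_system, 4))  # prints 0.9678
```

Redundancy raises the controller stage from 0.95 to 0.9975, so the weakest links are now the non-redundant sensor and actuator.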

  9. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases...

  10. Irradiation Pattern Analysis for Designing Light Sources-Based on Light Emitting Diodes

    International Nuclear Information System (INIS)

    Rojas, E.; Stolik, S.; La Rosa, J. de; Valor, A.

    2016-01-01

    Nowadays it is possible to design light sources with a specific irradiation pattern for many applications. As a result of their rapid development, light emitting diodes offer features such as high luminous efficiency, durability, reliability and flexibility, among others. In this paper an analysis of the irradiation pattern of light emitting diodes is presented. The approximation of these irradiation patterns by both Lambertian and Gaussian functions for the design of light sources is proposed. Finally, the obtained results and the practicality of fitting the irradiation pattern of light emitting diodes with these functions are discussed. (Author)
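
The Lambertian and Gaussian approximations discussed here have simple closed forms. This small sketch uses the standard generalized-Lambertian model; the exponent m and the angles are illustrative, not values from the paper.

```python
import math

def lambertian(theta, m=1.0):
    # Generalized Lambertian pattern: I(theta) = cos(theta)**m
    return math.cos(theta) ** m

def gaussian(theta, sigma):
    # Gaussian approximation of the same pattern
    return math.exp(-theta ** 2 / (2.0 * sigma ** 2))

def half_angle(m):
    # View angle at which a Lambertian LED drops to half intensity
    return math.acos(0.5 ** (1.0 / m))

# For an ideal Lambertian LED (m = 1) the half-intensity angle is 60 degrees
print(round(math.degrees(half_angle(1.0)), 1))  # prints 60.0
```

Larger m gives a narrower beam, and the Gaussian sigma can then be fit to match the same half-intensity angle.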

  11. Space Station Freedom power - A reliability, availability, and maintainability assessment of the proposed Space Station Freedom electric power system

    Science.gov (United States)

    Turnquist, S. R.; Twombly, M.; Hoffman, D.

    1989-01-01

    A preliminary reliability, availability, and maintainability (RAM) analysis of the proposed Space Station Freedom electric power system (EPS) was performed using the unit reliability, availability, and maintainability (UNIRAM) analysis methodology. Orbital replacement units (ORUs) having the most significant impact on EPS availability measures were identified. Also, the sensitivity of the EPS to variations in ORU RAM data was evaluated for each ORU. Estimates were made of average EPS power output levels and of the availability of power to the core area of the space station. The results of assessments of the availability of EPS power and of power to load distribution points in the space station are given. Some highlights of continuing studies being performed to understand EPS availability considerations are presented.

  12. Reliability of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs that considers the battery level as a key factor. Moreover, this model is based on the routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
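
The reliability-versus-power conflict of the multipath strategy can be made concrete with a toy model. The link reliability, hop counts and energy units below are invented, and the paper's model additionally conditions on the battery level.

```python
def path_reliability(link_rel, hops):
    # Probability a packet traverses all hops of one path
    return link_rel ** hops

def multipath_reliability(link_rel, hops, n_paths):
    # The packet is delivered if at least one replicated path succeeds
    p_fail = (1.0 - path_reliability(link_rel, hops)) ** n_paths
    return 1.0 - p_fail

def energy_cost(per_hop_energy, hops, n_paths):
    # Each extra path duplicates the transmission energy
    return per_hop_energy * hops * n_paths

for n in (1, 2, 3):
    print(n, round(multipath_reliability(0.9, 4, n), 4), energy_cost(1.0, 4, n))
```

With 90% links over four hops, going from one path to three raises delivery probability from about 0.66 to 0.96, but triples the transmission energy, which is exactly the trade-off the paper's model quantifies.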

  13. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problem that a variety of uncertain variables coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  14. A proposed heuristic methodology for searching reloading pattern

    International Nuclear Information System (INIS)

    Choi, K. Y.; Yoon, Y. K.

    1993-01-01

    A new heuristic method for loading pattern search has been developed to overcome the shortcomings of the algorithmic approach. To reduce the size of the vast solution space, general shuffling rules, a regionwise shuffling method, and a pattern grouping method were introduced. Entropy theory was applied to classify possible loading patterns into groups by the similarity between them. The pattern search program was implemented in the PROLOG language. A two-group nodal code, MEDIUM-2D, was used for analysis of the power distribution in the core. The above-mentioned methodology has been tested and shown to be effective in reducing the solution space down to a few hundred pattern groups. Burnable poison rods were then arranged in each pattern group in accordance with burnable poison distribution rules, which led to a further reduction of the solution space to several scores of acceptable pattern groups. The methods of maximizing cycle length (MCL) and minimizing the power-peaking factor (MPF) were applied to search for specific useful loading patterns among the acceptable pattern groups. Thus, several specific loading patterns with a low power-peaking factor and a long cycle length were successfully found among the selected pattern groups. (Author)

  15. Investment in new product reliability

    International Nuclear Information System (INIS)

    Murthy, D.N.P.; Rausand, M.; Virtanen, S.

    2009-01-01

    Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.
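
The trade-off between investment in reliability and the cost of unreliability can be sketched with a made-up cost model; the functional forms and coefficients below are illustrative assumptions, not the paper's framework.

```python
def total_cost(r, dev_coeff=10.0, failure_cost=100.0):
    # Hypothetical trade-off: development cost grows without bound as the
    # reliability level r approaches 1, while expected failure cost falls
    return dev_coeff * r / (1.0 - r) + failure_cost * (1.0 - r)

# Crude grid search for the cost-minimizing reliability level
best = min((total_cost(r / 1000), r / 1000) for r in range(1, 1000))
print(round(best[1], 2))  # prints 0.68
```

The optimum sits strictly below perfect reliability: past it, each extra increment of reliability costs more to build in than the failures it prevents.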

  16. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  17. Development of a Diagnostic Inventory for Mental Health Pattern (MHP) : Reliability and Validity of the MHP Scale.

    OpenAIRE

    橋本, 公雄; 徳永, 幹雄

    1999-01-01

    The purpose of this study was to develop the Mental Health Pattern (MHP), a scale that classifies state of mental health as it pertains to stress and quality of life (QOL), and to confirm the reliability and validity of this scale. To achieve these ends, a 70-item questionnaire was administered to student (n=256) and adult (n=172) samples consisting of males and females. Factor analysis revealed that stress and QOL consisted of 13 factors. Based on these factors, the MHP was designed with...

  18. Subtyping of Toddlers with ASD Based on Patterns of Social Attention Deficits

    Science.gov (United States)

    2015-10-30

    Award W81XWH-13-1-0179, “Subtyping of Toddlers with ASD Based on Patterns of Social Attention Deficits”. The goal of the proposed project is to elucidate the factors that affect spontaneous dyadic orienting at the earliest stages when ASD can be reliably diagnosed.

  19. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared with the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
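
The symmetric rank-one (SR1) Hessian update mentioned above has a standard closed form. This sketch applies it to a made-up 2-D quadratic and checks the key property that curvature is reproduced along the step direction; it is not the authors' RBDO code.

```python
import numpy as np

def sr1_update(B, s, y):
    # Symmetric rank-one update of the Hessian approximation B,
    # using step s and gradient difference y:
    #   B+ = B + (y - B s)(y - B s)^T / ((y - B s)^T s)
    r = y - B @ s
    denom = r @ s
    if abs(denom) < 1e-12 * np.linalg.norm(r) * np.linalg.norm(s):
        return B          # skip the update when the denominator is tiny
    return B + np.outer(r, r) / denom

# For a quadratic with Hessian H, y = H s exactly, and one SR1 correction
# makes the approximation match H along the step direction.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
s = np.array([1.0, 0.0])
y = H @ s
B1 = sr1_update(B, s, y)
print(np.allclose(B1 @ s, H @ s))  # prints True
```

Unlike BFGS, SR1 needs the safeguard on the denominator, but it preserves symmetry and can capture indefinite curvature, which is useful for second order reliability evaluations.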

  20. 75 FR 14386 - Interpretation of Transmission Planning Reliability Standard

    Science.gov (United States)

    2010-03-25

    ...] Interpretation of Transmission Planning Reliability Standard March 18, 2010. AGENCY: Federal Energy Regulatory... transmission planning Reliability Standard TPL-002-0 provides that planning authorities and transmission... Reliability Standard TPL-002-0. In this order, the Commission proposes to reject NERC's proposed...

  1. Rainfall Reliability Evaluation for Stability of Municipal Solid Waste Landfills on Slope

    Directory of Open Access Journals (Sweden)

    Fu-Kuo Huang

    2013-01-01

    A method to assess the reliability of the stability of municipal solid waste (MSW) landfills on slope under rainfall infiltration is proposed. Parameter studies are first performed to explore the influence of several factors on the stability of MSW; these factors include rainfall intensity, duration and pattern, and the engineering properties of MSW. Then 100 different combinations of parameters are generated and the associated stability analyses of MSW on slope are performed, assuming each parameter is uniformly distributed over its reasonable range. The performance of the stability of MSW is then interpreted by an artificial neural network (ANN) trained and verified on the aforementioned 100 analysis results. Finally, the reliability of the stability of MSW landfills on slope is evaluated and explored for different rainfall parameters by the ANN model with the first-order reliability method (FORM) and Monte Carlo simulation (MCS).

  2. Reliability evaluation for offshore wind farms

    DEFF Research Database (Denmark)

    Zhao, Menghua; Blåbjerg, Frede; Chen, Zhe

    2005-01-01

    In this paper, a new reliability index - Loss Of Generation Ratio Probability (LOGRP) - is proposed for evaluating the reliability of an electrical system for offshore wind farms, which emphasizes the design of wind farms rather than the adequacy for specific load demand. A practical method to calculate the LOGRP of offshore wind farms is proposed and evaluated.

  3. INTERACTIONS BETWEEN MODULATED LUMINANCE PATTERNS AND RANDOM-DOT PATTERNS

    NARCIS (Netherlands)

    CORNELISSEN, FW; KOOIJMAN, AC

    1994-01-01

    It has been suggested that density modulated random-dot patterns can be used to study higher order pattern vision [Van Meeteren and Barlow (1981) Vision Research, 21, 765-777]. The high contrast dots of which the pattern is composed are assumed to be reliably transduced and transmitted by the lower

  4. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure, and quantifying the effects of each random source or their distribution parameters on the failure probability or reliability. In this paper, the time-dependent parametric reliability sensitivity (PRS) analysis as well as the global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change of each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with the first order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
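
A parametric reliability sensitivity, the derivative of a failure probability with respect to a distribution parameter, can be estimated in a single Monte Carlo run with the standard score-function (likelihood-ratio) estimator. The normal variable and threshold below are hypothetical, and the paper's envelope-function method for motion mechanisms is more involved.

```python
import random

random.seed(1)

def pf_and_sensitivity(mu, sigma, threshold, n=200000):
    # Score-function estimator: one MCS run yields both the failure
    # probability and its derivative w.r.t. the mean mu, because
    # dPf/dmu = E[ 1{failure} * d(log pdf)/dmu ]
    fail = 0
    score_sum = 0.0
    for _ in range(n):
        x = random.gauss(mu, sigma)
        if x > threshold:                        # failure event
            fail += 1
            score_sum += (x - mu) / sigma ** 2   # d(log pdf)/d(mu)
    return fail / n, score_sum / n

pf, dpf_dmu = pf_and_sensitivity(0.0, 1.0, 2.0)
print(round(pf, 3), round(dpf_dmu, 3))
```

For this toy case the exact values are Pf = 1 - Phi(2) ≈ 0.0228 and dPf/dmu = phi(2) ≈ 0.054, so the estimator can be checked directly.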

  5. Construct validity and reliability of a checklist for volleyball serve analysis

    Directory of Open Access Journals (Sweden)

    Cicero Luciano Alves Costa

    2018-03-01

    This study aims to investigate the construct validity and reliability of a checklist for qualitative analysis of the overhand serve in volleyball. Fifty-five male subjects aged 13-17 years participated in the study. The overhand serve was analyzed using the checklist proposed by Meira Junior (2003), which analyzes the pattern of the serve movement in four phases: (I) initial position, (II) ball lifting, (III) ball attacking, and (IV) finalization. Construct validity was analyzed using confirmatory factor analysis and reliability through Cronbach's alpha coefficient. The construct validity was supported by confirmatory factor analysis, with the RMSEA (0.037 [90% confidence interval = 0.020-0.040]), CFI (0.970) and TLI (0.950) results indicating good fit of the model. In relation to reliability, Cronbach's alpha coefficient was 0.661, a value considered acceptable. Among the items on the checklist, ball lifting and ball attacking showed the highest factor loadings, 0.69 and 0.99, respectively. In summary, the checklist of Meira Junior (2003) for the qualitative analysis of the overhand serve can be considered a valid and reliable instrument for use in research in the field of sports sciences.
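
Cronbach's alpha, used above to assess reliability, is straightforward to compute from item scores. A self-contained sketch with hypothetical ratings (not the study's data):

```python
def cronbach_alpha(items):
    # items: one list of scores per checklist item, same respondents in order
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1.0 - item_vars / var(totals))

# Four checklist items scored for five subjects (hypothetical data)
scores = [
    [3, 4, 3, 5, 4],
    [2, 4, 3, 5, 4],
    [3, 5, 4, 5, 3],
    [2, 4, 3, 4, 4],
]
print(round(cronbach_alpha(scores), 3))  # prints 0.918
```

When all items move together across respondents, the variance of the totals dominates the sum of item variances and alpha approaches one.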

  6. Mining a database of single amplified genomes from Red Sea brine pool extremophiles—improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA)

    Science.gov (United States)

    Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg

    2014-01-01

    Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophiles' genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO) data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO) terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 enzyme commission (E.C.) numbers was translated into 171 GO-terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO-terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO-terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website.

  7. Mining a database of single amplified genomes from Red Sea brine pool extremophiles-improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA).

    KAUST Repository

    Grötzinger, Stefan W.

    2014-04-07


  8. Non-destructive Reliability Evaluation of Electronic Device by ESPI

    International Nuclear Information System (INIS)

    Yoon, Sung Un; Kim, Koung Suk; Kang, Ki Soo; Jo, Seon Hyung

    2001-01-01

    This paper proposes electronic speckle pattern interferometry (ESPI) for the reliability evaluation of electronic devices. In particular, vibration in devices such as air-conditioner fans and washing-machine motors is an important design factor, but it is difficult to apply the previous method, accelerometry, to devices with complex geometry. ESPI, a non-contact measurement technique, is applied here to the vibration analysis of a commercial air-conditioner fan. Vibration mode shapes, natural frequencies and the frequency ranges are determined and compared with those of an FEM analysis. In the mechanical design of new products, ESPI compensates for the weak points of the previous method and supplies effective design information

  9. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    An approach is proposed for the recognition of various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. After learning, the artificial neural network reliably identified cube-recognition processes, for example left- or right-oriented Necker cubes with different intensities of their edges. We construct an artificial neural network based on the perceptron architecture and demonstrate its effectiveness in pattern recognition of the experimental EEG data.

  10. Field reliability of electronic systems

    International Nuclear Information System (INIS)

    Elm, T.

    1984-02-01

    This report investigates, through several examples from the field, the reliability of electronic units in a broader sense. That is, it treats not just random parts failure, but also inadequate reliability design and (externally and internally) induced failures. The report is not meant to be merely an indication of the state of the art of the reliability prediction methods we know, but also a contribution to the investigation of the man-machine interplay in the operation and repair of electronic equipment. The report firmly links electronics reliability to safety and risk analysis approaches, with a broader, system-oriented view of reliability prediction and with post-failure stress analysis. It is intended to reveal, in a qualitative manner, the existence of symptom and cause patterns. It provides a background for further investigations to identify the detailed mechanisms of the faults and the remedial actions and precautions for achieving cost-effective reliability. (author)

  11. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    Predicting safety critical software reliability has been a critical issue in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environments well represent the operational profile. The user's interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. Under the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising an aging factor and a coverage factor are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase, by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig
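
The idea of converting a test-phase failure rate to an operational one via testing environment factors can be sketched as follows. The exponential reliability model and all numbers are illustrative assumptions; the paper's definitions of the aging and coverage factors are more detailed.

```python
import math

def operational_reliability(test_failure_rate, aging_factor, coverage_factor,
                            mission_time):
    # Hypothetical conversion: the test failure rate is scaled by testing
    # environment factors, then plugged into an exponential reliability model.
    # Factors below 1 encode testing that stresses the software beyond the
    # operational profile, so predicted operational reliability is higher.
    lam_op = test_failure_rate * aging_factor * coverage_factor
    return math.exp(-lam_op * mission_time)

print(round(operational_reliability(1e-3, 0.5, 0.6, 1000.0), 3))  # prints 0.741
```

With both factors equal to one the prediction collapses back to the raw test reliability, which is the paper's baseline case.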

  12. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by : the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit : response surface, in plac...

  13. Interrater reliability of a Pilates movement-based classification system.

    Science.gov (United States)

    Yu, Kwan Kenny; Tulloch, Evelyn; Hendrick, Paul

    2015-01-01

    To determine the interrater reliability for identification of a specific movement pattern using a Pilates classification system. Videos of 5 subjects performing specific movement tasks were sent to raters trained in the DMA-CP classification system. Ninety-six raters completed the survey. Interrater reliability for the detection of a directional bias was excellent (Pi = 0.92 and K(free) = 0.89). Interrater reliability for classifying an individual into a specific subgroup was moderate (Pi = 0.64, K(free) = 0.55); however, raters who had completed levels 1-4 of the DMA-CP training and reported using the assessment daily demonstrated excellent reliability (Pi = 0.89 and K(free) = 0.87). The classification system demonstrated almost perfect agreement in determining the existence of a specific movement pattern and classifying into a subgroup for experienced raters. There was a trend for greater reliability associated with increased levels of training and experience of the raters. Copyright © 2014 Elsevier Ltd. All rights reserved.
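
The free-marginal kappa statistic reported above corrects raw agreement for chance. A minimal sketch; the 94.5% agreement figure is an invented example, not the study's raw data.

```python
def free_marginal_kappa(p_observed, n_categories):
    # Randolph's free-marginal kappa: chance agreement is taken as 1/k when
    # raters are not constrained to fixed category frequencies
    p_chance = 1.0 / n_categories
    return (p_observed - p_chance) / (1.0 - p_chance)

# With 2 categories, 94.5% observed agreement yields K(free) = 0.89
print(round(free_marginal_kappa(0.945, 2), 2))  # prints 0.89
```

The more categories there are, the smaller the chance-agreement baseline, so the same observed agreement yields a higher kappa.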

  14. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a model of physical reality and from a numerical solution, which lead to an evaluation of needs and resources. The goal of reliability analysis is to evaluate the confidence that can be granted to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)

  15. Software reliability prediction using SPN | Abbasabadee | Journal of ...

    African Journals Online (AJOL)

    Software reliability prediction using SPN. ... In this research, for the computation of software reliability, a component reliability model based on SPN is proposed. An isomorphic Markov ...

  16. Reliability evaluation of deregulated electric power systems for planning applications

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.

    2008-01-01

    In a deregulated electric power utility industry in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices, and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out on the Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology.
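
The key idea, combining composite-system availability with a market reliability measure, can be caricatured with a two-state Markov availability model. The rates and the simple product combination below are illustrative assumptions, not the paper's full absorbing-state model.

```python
def steady_state_availability(failure_rate, repair_rate):
    # Two-state Markov model (up/down): steady-state availability
    # A = mu / (lambda + mu), equivalently MTTF / (MTTF + MTTR)
    return repair_rate / (failure_rate + repair_rate)

def market_adjusted_reliability(system_availability, market_reliability):
    # Sketch of the paper's idea: HLII availability combined with the
    # reliability of the electricity market itself (here, a plain product
    # under an assumed independence between the two)
    return system_availability * market_reliability

a = steady_state_availability(failure_rate=0.01, repair_rate=0.49)
print(round(a, 3), round(market_adjusted_reliability(a, 0.95), 3))
```

Even a highly available physical system (0.98 here) yields a noticeably lower effective reliability once an imperfect market layer is accounted for.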

  17. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    Science.gov (United States)

    Mehran, Babak; Nakamura, Hideki

    Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on modeling travel time variation as a function of demand, capacity, weather conditions and road accidents. For the subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying a Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
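
The buffer time index used here as the reliability measure has a direct computation from a travel time record. A sketch with invented 5-minute travel times follows (the paper instead derives travel times from shockwave analysis and speed-flow relationships).

```python
def buffer_time_index(travel_times):
    # BTI = (95th-percentile travel time - mean travel time) / mean travel time:
    # the extra buffer a traveler must budget to arrive on time 95% of the time
    xs = sorted(travel_times)
    mean = sum(xs) / len(xs)
    # simple nearest-rank 95th percentile
    idx = min(len(xs) - 1, int(round(0.95 * (len(xs) - 1))))
    return (xs[idx] - mean) / mean

# Hypothetical 5-minute travel times (minutes) for one segment over one day
times = [10, 10, 11, 10, 12, 11, 10, 18, 25, 11,
         10, 10, 11, 10, 12, 10, 11, 10, 10, 13]
print(round(buffer_time_index(times), 3))  # prints 0.532
```

A handful of incident-driven outliers (the 18- and 25-minute runs) pushes the index above 0.5, meaning travelers must budget roughly 50% extra time, which is exactly the kind of unreliability congestion relief schemes target.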

  18. Mining a database of single amplified genomes from Red Sea brine pool extremophiles – Improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA)

    Directory of Open Access Journals (Sweden)

    Stefan Wolfgang Grötzinger

    2014-04-01

    Full Text Available Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophile genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the INDIGO data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile & Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO) terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2,577 E.C. numbers was translated into 171 GO-terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO-terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO-terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website.

  19. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Seung Ki [Div. of Research Reactor System Design, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); No, Young Gyu; Seong, Poong Hyun [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-04-15

    The safety of nuclear power plants is analyzed by a probabilistic risk assessment, and the fault tree analysis is the most widely used method for a risk assessment with the event tree analysis. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analyses of dynamic systems that have various operation modes with time. A reliability matrix is proposed and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides a convenient and intuitive modeling of various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant.

  20. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    International Nuclear Information System (INIS)

    Shin, Seung Ki; No, Young Gyu; Seong, Poong Hyun

    2016-01-01

    The safety of nuclear power plants is analyzed by a probabilistic risk assessment, and the fault tree analysis is the most widely used method for a risk assessment with the event tree analysis. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analyses of dynamic systems that have various operation modes with time. A reliability matrix is proposed and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides a convenient and intuitive modeling of various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant

  1. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation that maximizes the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation-based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike those models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.

  2. A methodology to incorporate organizational factors into human reliability analysis

    International Nuclear Information System (INIS)

    Li Pengcheng; Chen Guohua; Zhang Li; Xiao Dongsheng

    2010-01-01

    A new holistic methodology for Human Reliability Analysis (HRA) is proposed to model the effects of organizational factors on human reliability. Firstly, a conceptual framework is built, which is used to analyze the causal relationships between the organizational factors and human reliability. Then, the inference model for Human Reliability Analysis is built by combining the conceptual framework with Bayesian networks, which is used to execute the causal inference and diagnostic inference of human reliability. Finally, a case example is presented to demonstrate the specific application of the proposed methodology. The results show that the proposed methodology of combining the conceptual model with Bayesian networks not only models the causal relationships between organizational factors and human reliability easily, but also allows one, in a given context, to quantitatively measure human operational reliability and to identify the most likely root causes of human error, as well as their prioritization. (authors)
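The two inference directions mentioned above (causal and diagnostic) can be illustrated with a toy two-link chain, organization quality → training adequacy → human error. The structure and every probability below are invented for illustration; a real application would use a full Bayesian-network library over the paper's conceptual framework.

```python
# Stdlib sketch of causal and diagnostic inference in a tiny
# Bayesian network. All probabilities are illustrative assumptions.

# P(org quality), P(training | org), P(human error | training)
p_org = {"good": 0.8, "poor": 0.2}
p_train = {"good": {"ok": 0.9, "bad": 0.1},
           "poor": {"ok": 0.4, "bad": 0.6}}
p_err = {"ok": 0.01, "bad": 0.15}

# Causal inference: marginal probability of a human error,
# summing the joint over all parent configurations.
p_error = sum(p_org[o] * p_train[o][t] * p_err[t]
              for o in p_org for t in ("ok", "bad"))

# Diagnostic inference: P(org = poor | error) by Bayes' rule.
joint_poor = sum(p_org["poor"] * p_train["poor"][t] * p_err[t]
                 for t in ("ok", "bad"))
p_poor_given_error = joint_poor / p_error
```

Observing an error roughly doubles the belief that the organizational factor is poor, which is the kind of root-cause ranking the methodology aims at.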

  3. Reliability enhancement of portal frame structure by finite element synthesis

    International Nuclear Information System (INIS)

    Nakagiri, S.

    1989-01-01

    The stochastic finite element methods have been applied to the evaluation of structural response and reliability of uncertain structural systems. The structural reliability index of the advanced first-order second moment (AFOSM) method is a candidate measure for assessing structural safety and reliability. The reliability index can be evaluated once a baseline design of the structure of interest is proposed and the covariance matrix of the probabilistic variables is acquired to represent the uncertainties involved in the structural system. The reliability index thus evaluated, however, is not guaranteed to be the largest attainable for the structure: there remains the possibility of enhancing the structural reliability for the given covariance matrix by changing the baseline design. From such a viewpoint of structural optimization, some ideas have been proposed to maximize the reliability or to minimize the failure probability of uncertain structural systems. A method of changing the design is proposed to increase the reliability index from its baseline value to another desired value. The reliability index in this paper is calculated mainly by the method of Lagrange multipliers

  4. Understanding the Functionality of Human Activity Hotspots from Their Scaling Pattern Using Trajectory Data

    Directory of Open Access Journals (Sweden)

    Tao Jia

    2017-11-01

    Full Text Available Human activity hotspots are the clusters of activity locations in space and time, and a better understanding of their functionality would be useful for urban land use planning and transportation. In this article, using trajectory data, we aim to infer the functionality of human activity hotspots from their scaling pattern in a reliable way. Specifically, a large number of stopping locations are extracted from trajectory data, which are then aggregated into activity hotspots. Activity hotspots are found to display scaling patterns in terms of the sublinear scaling relationships between the number of stopping locations and the number of points of interest (POIs), which indicates economies of scale in human interactions with urban land use. Importantly, this scaling pattern remains stable over time. This finding inspires us to devise an allometric ruler to identify the activity hotspots whose functionality could be reliably estimated using the stopping locations. Thereafter, a novel Bayesian inference model is proposed to infer their urban functionality, which examines the spatial and temporal information of stopping locations covering 75 days. Experimental results suggest that the functionality of the identified activity hotspots, such as the railway station, is reliably inferred from stopping locations.
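The sublinear scaling relationship reported above is a power law y ≈ c·x^β with β < 1, and a minimal way to check it is an ordinary least-squares fit in log-log space. The hotspot data below are synthetic, generated with β = 0.8; the paper fits real trajectory-derived counts.

```python
# Sketch of estimating a scaling exponent between the number of
# stopping locations (x) and POIs (y) per hotspot. Data are synthetic.
import math

def fit_power_law(xs, ys):
    """Least-squares fit of log y = log c + beta * log x."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - beta * mx)
    return c, beta

# Synthetic hotspots generated with beta = 0.8 (economies of scale:
# doubling stopping locations less than doubles POIs).
xs = [10, 50, 100, 500, 1000, 5000]
ys = [2.0 * x ** 0.8 for x in xs]
c, beta = fit_power_law(xs, ys)
```

An estimated β below 1 on real data is what the abstract calls the stable sublinear scaling pattern.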

  5. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm{sup 3} (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)
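The interobserver agreement statistic used above, the (unweighted) kappa coefficient, can be computed with a few lines of stdlib Python. The two rating sequences below are invented for illustration; the study's actual ratings used a classification derived from the Dodge systems.

```python
# Stdlib sketch of Cohen's kappa for two raters. Ratings are made up.
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two equal-length rating lists."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2      # chance agreement
    return (po - pe) / (1 - pe)

rater1 = ["I", "I", "II", "III", "II", "I", "III", "II"]
rater2 = ["I", "II", "II", "III", "II", "I", "III", "I"]
kappa = cohens_kappa(rater1, rater2)
```

Values near 0 indicate chance-level agreement and values near 1 near-perfect agreement, which is how the kappa range 0.11 to 1 in the abstract should be read.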

  6. Fabric Weave Pattern and Yarn Color Recognition and Classification Using a Deep ELM Network

    Directory of Open Access Journals (Sweden)

    Babar Khan

    2017-10-01

    Full Text Available Usually, a fabric weave pattern is recognized using methods which identify the warp floats and weft floats. Although these methods perform well for uniform or repetitive weave patterns, in the case of complex weave patterns they become computationally complex and the classification error rates are comparatively higher. Furthermore, the fault-tolerance (invariance) and stability (selectivity) of the existing methods still need to be enhanced. We present a novel biologically-inspired method to invariantly recognize the fabric weave pattern (fabric texture) and yarn color from a color image input. We propose a model in which the fabric weave pattern descriptor is based on the HMAX model for computer vision, inspired by the hierarchy in the visual cortex; the color descriptor is based on the opponent color channel, inspired by the classical opponent color theory of human vision; and the classification stage is composed of a multi-layer (deep) extreme learning machine. Since the weave pattern descriptor, yarn color descriptor, and classification stage are all biologically inspired, the proposed method is completely biologically plausible. The classification performance of the proposed algorithm indicates that biologically-inspired computer-aided-vision models might provide an accurate, fast, reliable and cost-effective solution for industrial automation.

  7. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results

  8. The dynamics of visual experience, an EEG study of subjective pattern formation.

    Directory of Open Access Journals (Sweden)

    Mark A Elliott

    Full Text Available BACKGROUND: Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. METHODOLOGY/PRINCIPAL FINDINGS: Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). CONCLUSIONS/SIGNIFICANCE: Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low-frequency modulation.

  9. On the NPP structural reliability

    International Nuclear Information System (INIS)

    Klemin, A.I.; Polyakov, E.F.

    1980-01-01

    The main statements, peculiarities, and capabilities of the first branch guiding technical material (GTM), ''Methods for calculating the structural reliability of NPPs and their systems at the design stage'', are reviewed. The GTM presents recommendations on calculating the reliability of such specific systems as the reactor control and protection system, the instrumentation and automation system, and the safety systems. The GTM is based on analytical methods of modern reliability theory, using the minimal cut set methodology for complex systems. It is stressed that calculations by the proposed methods yield a wide set of reliability parameters reflecting, separately or jointly, the dependability and maintainability of an NPP. For NPPs operating on a variable load-following schedule, additional parameters are considered that characterize reliability with account of the proposed power-change regime, i.e., taking into account failures caused by the delivered power falling below the required level or the required power rising above the delivered level

  10. Reliability improvement of multiversion software by exchanging modules

    International Nuclear Information System (INIS)

    Shima, Kazuyuki; Matsumoto, Ken-ichi; Torii, Koji

    1996-01-01

    In this paper, we propose a method to improve the reliability of multiversion software. In the previously proposed CER scheme, checkpoints are placed in the program versions, and errors are detected and recovered at the checkpoints. This prevents versions from failing and improves the reliability of multiversion software. It has been pointed out, however, that CER decreases the reliability of multiversion software if the detection and recovery of errors can themselves fail. In the method proposed in this paper, versions of the program are developed following the same module specifications. When failures of versions are detected, the faulty modules are identified and replaced with other modules. This creates versions without faulty modules and improves the reliability of the multiversion software. With the proposed method, the failure probability of the multiversion software is estimated to drop to about one hundredth, where the failure probability of each version is 0.000698, the number of versions is 5, and the number of modules is 20. (author)
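As a back-of-the-envelope companion to the numbers quoted above, the snippet below computes the failure probability of a 5-version system under majority voting, assuming independent version failures. This is generic N-version arithmetic under an idealized independence assumption, not the paper's module-exchange method, whose estimate also accounts for module replacement.

```python
# Failure probability of an N-version majority-voting system,
# assuming independent version failures (an idealization; real
# versions can fail dependently on common inputs).
from math import comb

def majority_vote_failure(p, n):
    """P(at least ceil(n/2)+ of n independent versions fail)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

p_version = 0.000698      # per-version failure probability from the abstract
p_system = majority_vote_failure(p_version, 5)
```

With these figures the dominant term is 3 simultaneous failures out of 5, so the system failure probability is orders of magnitude below the per-version value.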

  11. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    Science.gov (United States)

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  12. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA

  13. 78 FR 44475 - Protection System Maintenance Reliability Standard

    Science.gov (United States)

    2013-07-24

    ... that the performance or product has some reliability-related value, then the requirement will have...] Protection System Maintenance Reliability Standard AGENCY: Federal Energy Regulatory Commission, Energy... Commission proposes to approve a revised Reliability Standard, PRC-005- 2--Protection System Maintenance, to...

  14. Is the rearfoot pattern the most frequently foot strike pattern among recreational shod distance runners?

    Science.gov (United States)

    de Almeida, Matheus Oliveira; Saragiotto, Bruno Tirotti; Yamato, Tiê Parma; Lopes, Alexandre Dias

    2015-02-01

    To determine the distribution of the foot strike patterns among recreational shod runners and to compare the personal and training characteristics between runners with different foot strike patterns. Cross-sectional study. Areas of running practice in São Paulo, Brazil. 514 recreational shod runners older than 18 years and free of injury. Foot strike patterns were evaluated with a high-speed camera (250 Hz) and photocells to assess the running speed of participants. Personal and training characteristics were collected through a questionnaire. The inter-rater reliability of the visual foot strike pattern classification method was 96.7% and intra-rater reliability was 98.9%. 95.1% (n = 489) of the participants were rearfoot strikers, 4.1% (n = 21) were midfoot strikers, and four runners (0.8%) were forefoot strikers. There were no significant differences between strike patterns for personal and training characteristics. This is the first study to demonstrate that almost all recreational shod runners were rearfoot strikers. The visual method of evaluation seems to be a reliable and feasible option to classify foot strike pattern. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Reliability-based performance simulation for optimized pavement maintenance

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Le, Thanh-Son

    2011-01-01

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies; therefore, the available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using the multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominated solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: → A novel algorithm using the multi-objective particle swarm optimization technique. → Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. → A probabilistic model for regression parameters is employed to assess reliability-based performance. → The proposed approach can help decision makers to optimize roadway maintenance plans.
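The non-dominance criterion at the heart of the cost-reliability tradeoff can be shown in miniature: a maintenance plan survives if no other plan is at least as cheap and at least as reliable. The candidate plans below are made up, and the paper's algorithm searches this front with particle swarm optimization rather than enumerating it as done here.

```python
# Sketch of a Pareto (non-dominated) filter over maintenance plans,
# each scored as (cost to minimize, reliability to maximize).
# The plan list is illustrative.

def pareto_front(plans):
    """Return plans not dominated by any other plan."""
    front = []
    for p in plans:
        dominated = any(q[0] <= p[0] and q[1] >= p[1] and q != p
                        for q in plans)
        if not dominated:
            front.append(p)
    return front

plans = [(100, 0.90), (120, 0.95), (150, 0.95), (90, 0.85), (200, 0.99)]
front = pareto_front(plans)
```

Here (150, 0.95) is dominated by (120, 0.95), which achieves the same reliability for less cost; the remaining plans form the tradeoff curve a decision maker would choose from.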

  16. Reliability-based performance simulation for optimized pavement maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jui-Sheng, E-mail: jschou@mail.ntust.edu.tw [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China); Le, Thanh-Son [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China)

    2011-10-15

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies; therefore, the available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using the multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominated solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: > A novel algorithm using the multi-objective particle swarm optimization technique. > Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. > A probabilistic model for regression parameters is employed to assess reliability-based performance. > The proposed approach can help decision makers to optimize roadway maintenance plans.

  17. 75 FR 35689 - System Personnel Training Reliability Standards

    Science.gov (United States)

    2010-06-23

    ... planning staff at control areas and reliability coordinators concerning power system characteristics and... Coordination--Staffing). \\11\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, Federal... American bulk electric system are competent to perform those reliability-related tasks.\\22\\ The proposed...

  18. A Bayesian approach to degradation-based burn-in optimization for display products exhibiting two-phase degradation patterns

    International Nuclear Information System (INIS)

    Yuan, Tao; Bae, Suk Joo; Zhu, Xiaoyan

    2016-01-01

    Motivated by the two-phase degradation phenomena observed in light displays (e.g., plasma display panels (PDPs), organic light emitting diodes (OLEDs)), this study proposes a new degradation-based burn-in testing plan for display products exhibiting two-phase degradation patterns. The primary focus of the burn-in test in this study is to eliminate the initial rapid degradation phase, while the major purpose of traditional burn-in tests is to detect and eliminate early failures from weak units. A hierarchical Bayesian bi-exponential model is used to capture two-phase degradation patterns of the burn-in population. Mission reliability and total cost are introduced as planning criteria. The proposed burn-in approach accounts for unit-to-unit variability within the burn-in population, and uncertainty concerning the model parameters, mainly in the hierarchical Bayesian framework. Available pre-burn-in data is conveniently incorporated into the burn-in decision-making procedure. A practical example of PDP degradation data is used to illustrate the proposed methodology. The proposed method is compared to other approaches such as the maximum likelihood method or the change-point regression. - Highlights: • We propose a degradation-based burn-in test for products with two-phase degradation. • Mission reliability and total cost are used as planning criteria. • The proposed burn-in approach is built within the hierarchical Bayesian framework. • A practical example was used to illustrate the proposed methodology.

  19. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
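As a hedged illustration of an SRGM of the NHPP type, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) to synthetic, noise-free cumulative failure counts by coarse grid search and reads off the expected remaining defects. The paper estimates parameters by Bayesian inference; the grid search, the data, and the values a = 50, b = 0.3 are illustrative substitutes.

```python
# Grid-search fit of the Goel-Okumoto NHPP mean value function
# m(t) = a * (1 - exp(-b t)) to synthetic cumulative failure counts.
import math

def m(t, a, b):
    return a * (1.0 - math.exp(-b * t))

# Synthetic cumulative failures observed at test times t = 1..10,
# generated noise-free from a = 50, b = 0.3 for clarity.
times = list(range(1, 11))
observed = [m(t, 50.0, 0.3) for t in times]

best_a, best_b, best_sse = None, None, float("inf")
for a in range(30, 71):             # candidate total defect counts
    for bi in range(5, 61):         # candidate detection rates b = 0.05..0.60
        b = bi / 100
        sse = sum((m(t, a, b) - y) ** 2 for t, y in zip(times, observed))
        if sse < best_sse:
            best_a, best_b, best_sse = a, b, sse

# Expected defects still latent after the last test time.
remaining = best_a - observed[-1]
```

The "remaining" figure is the quantity of interest for PSA: the expected number of undetected defects that could still cause on-demand failures.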

  20. Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xiyang Liu

    2016-01-01

    Full Text Available Information abounds in all fields of real life; it is often recorded as digital data in computer systems and treated as an increasingly important resource. Its rapid growth in volume causes great difficulties in both storage and analysis. Massive data storage in cloud environments has a significant impact on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds that considers both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is composed of storage space occupation cost, data migration cost, and communication cost. Based on an analysis of the storage process, transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its PSO-based solution algorithm. In the experiments, the proposed model is tested in combination with 3 storage strategies. Experimental results show that the proposed model is effective, and that it performs much better in combination with proper file-splitting methods.
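    As a sketch of the optimization machinery only (the cost terms and bounds below are invented stand-ins, not the paper's CMPSO cost model or constraint handling), a plain particle swarm minimizing a weighted sum of cost and negated reliability might look like:

```python
import random

def storage_objective(x, w=0.5):
    """Toy scalarisation: x = (replication factor, fraction migrated).
    Cost grows with both; reliability grows with replication."""
    cost = 1.0 * x[0] + 0.5 * x[1]
    reliability = 1.0 - 0.5 ** x[0]
    return w * cost - (1 - w) * reliability

def pso(f, bounds, n=20, iters=60, seed=1):
    """Basic bounded PSO: track per-particle and global bests."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=f)
    return gbest
```

A true multiobjective CMPSO would maintain an archive of nondominated solutions rather than a single scalarised best; the swarm dynamics, however, are the same.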

  1. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, yielding texture (repetitive) patterns, and extracts texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the feature vector containing the extracted texture features, because a high-dimensional feature vector can degrade classification performance; in this way the paper configures an effective feature vector of discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel is used with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
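    The one-against-all decomposition is generic and can be sketched independently of the SVM itself. In this minimal, dependency-free sketch, a trivial centroid scorer stands in for the paper's RBF-kernel SVM; the wrapper, not the learner, is the point:

```python
class OneAgainstAll:
    """One-against-all (OAA) multiclass wrapper: trains one binary
    scorer per class (class k vs. rest) and predicts the class whose
    scorer returns the largest decision value."""
    def __init__(self, make_binary_learner):
        self.make = make_binary_learner
        self.models = {}

    def fit(self, X, y):
        for k in set(y):
            labels = [1 if yi == k else -1 for yi in y]
            self.models[k] = self.make(X, labels)
        return self

    def predict(self, x):
        return max(self.models, key=lambda k: self.models[k](x))

def centroid_learner(X, labels):
    """Stand-in binary learner: score is the negated squared distance
    to the mean of the positive class (higher = more likely positive)."""
    pos = [x for x, l in zip(X, labels) if l == 1]
    mu = [sum(c) / len(pos) for c in zip(*pos)]
    return lambda x: -sum((a - b) ** 2 for a, b in zip(x, mu))
```

In practice each binary scorer would be an SVM decision function (e.g. with a Gaussian RBF kernel); only `centroid_learner` would change.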

  2. Risk assessment and reliability for low level radioactive waste disposal

    International Nuclear Information System (INIS)

    Gregory, P.O.; Jones, G.A.

    1986-01-01

    The reliability of critical design features at low-level radioactive waste disposal facilities is a major concern in the licensing of these structures. To date, no systematic methodology has been adopted to evaluate the geotechnical reliability of Uranium Mill Tailings Remedial Action (UMTRA) disposal facilities currently being designed and/or constructed. This paper discusses and critiques the deterministic methods currently used to evaluate UMTRA reliability. Because deterministic methods may not be applicable in some cases owing to the unusually long design life of UMTRA facilities, it is proposed that a probabilistic risk assessment-based methodology be used as a secondary method to aid in evaluating the geotechnical reliability of critical items. Similar methodologies have proven successful in evaluating the reliability of a variety of conventional earth structures. In this paper, an ''acceptable'' level of risk for UMTRA facilities is developed, an evaluation method is presented, and two example applications of the proposed methodology are provided for a generic UMTRA disposal facility. The proposed technique is shown to be a simple method which might be used to aid in reliability evaluations on a selective basis. Finally, other possible applications and the limitations of the proposed methodology are discussed.

  3. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Full Text Available Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  4. Reliability analysis of HVDC grid combined with power flow simulations

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yongtao; Langeland, Tore; Solvik, Johan [DNV AS, Hoevik (Norway); Stewart, Emma [DNV KEMA, Camino Ramon, CA (United States)

    2012-07-01

    Based on a DC grid power flow solver and the proposed GEIR, we carried out a reliability analysis for an HVDC grid test system proposed by CIGRE working group B4-58, where the failure statistics were collected from a literature survey. The proposed methodology is used to evaluate the impact of converter configuration on the overall reliability performance of the HVDC grid, where the symmetrical monopole configuration is compared with the bipole with metallic return wire configuration. The results quantify the improvement in reliability obtained by using the latter alternative. (orig.)

  5. Reliability payments to generation capacity in electricity markets

    International Nuclear Information System (INIS)

    Olsina, Fernando; Pringles, Rolando; Larisson, Carlos; Garcés, Francisco

    2014-01-01

    Electric power is a critical input to modern economies. Generation adequacy and security of supply in power systems running under competition are currently topics of high concern for consumers, regulators and governments. In a market setting, generation investments and adequacy can only be achieved by an appropriate regulatory framework that sets efficient remuneration to power capacity. Theoretically, energy-only electricity markets are efficient and no additional mechanism is needed. Nonetheless, the energy-only market design suffers from serious drawbacks. Therefore, jointly with the evolution of electricity markets, many remunerating mechanisms for generation capacity have been proposed. Explicit capacity payment was the first remunerating approach implemented and perhaps still the most applied. However, this price-based regulation has been applied not without severe difficulties and criticism. In this paper, a new reliability payment mechanism is envisioned. Capacity of each generating unit is paid according to its effective contribution to overall system reliability. The proposed scheme has many attractive features and preserves the theoretical efficiency properties of energy-only markets. Fairness, incentive compatibility, market power mitigation and settlement rules are investigated in this work. The article also examines the requirements for system data and models in order to implement the proposed capacity mechanism. A numerical example on a real hydrothermal system serves for illustrating the practicability of the proposed approach and the resulting reliability payments to the generation units. - Highlights: • A new approach for remunerating supply reliability provided by generation units is proposed. • The contribution of each generating unit to lessen power shortfalls is determined by simulations. • Efficiency, fairness and incentive compatibility of the proposed reliability payment are assessed.

  6. Reliability-guided digital image correlation for image deformation measurement

    International Nuclear Information System (INIS)

    Pan Bing

    2009-01-01

    A universally applicable reliability-guided digital image correlation (DIC) method is proposed for reliable image deformation measurement. The zero-mean normalized cross correlation (ZNCC) coefficient is used to identify the reliability of each computed point. The correlation calculation begins with a seed point and is then guided by the ZNCC coefficient: the neighbors of the point with the highest ZNCC coefficient in the queue of computed points are processed first. Thus the calculation path always follows the most reliable direction, and the possible error propagation of the conventional DIC method can be avoided. The proposed novel DIC method is universally applicable to images with shadows, discontinuous areas, and deformation discontinuity. Two image pairs were used to evaluate the performance of the proposed technique, and the successful results clearly demonstrate its robustness and effectiveness.
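    The reliability-guided scan order amounts to a best-first traversal keyed on ZNCC. A minimal sketch with a max-heap (assuming ZNCC values precomputed on a grid, which the real method computes on the fly) is:

```python
import heapq

def reliability_guided_order(zncc, seed):
    """Processing order for reliability-guided DIC: start from a seed
    point and always expand the queued neighbor with the highest ZNCC
    coefficient (max-heap emulated by negating values).
    `zncc` is a 2D list mapping grid point -> correlation coefficient."""
    rows, cols = len(zncc), len(zncc[0])
    visited = {seed}
    heap = [(-zncc[seed[0]][seed[1]], seed)]
    order = []
    while heap:
        _, (r, c) = heapq.heappop(heap)
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                visited.add((nr, nc))
                heapq.heappush(heap, (-zncc[nr][nc], (nr, nc)))
    return order
```

Because expansion always follows the highest available coefficient, an unreliable point (low ZNCC) is deferred until last and cannot seed error propagation into its neighbors.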

  7. Pattern of secure bilateral transactions ensuring power economic dispatch in hybrid electricity markets

    International Nuclear Information System (INIS)

    Kumar, Ashwani; Gao, Wenzhong

    2009-01-01

    This paper proposes a new method for determining secure bilateral transactions that ensures economic power dispatch of the generators, using new AC distribution factors for pool and bilateral coordinated markets. The new optimization problem considers simultaneous minimization of deviations from scheduled transactions and of the fuel cost of the generators in the network. The fuel cost has been obtained for a hybrid market model, and the impact of different percentages of bilateral demand on fuel cost, generation share, and the pattern of transactions has also been determined. The impact of an optimally located unified power flow controller (UPFC) on the bilateral transactions, fuel cost, and generation pattern has also been studied. Results have also been obtained for a pool market model. The proposed technique has been applied to the IEEE 24-bus reliability test system (RTS). (author)

  8. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  9. 75 FR 71625 - System Restoration Reliability Standards

    Science.gov (United States)

    2010-11-24

    ... to start operating and delivering electric power without assistance from the electric system... and system restoration and reporting following disturbances. \\3\\ North American Electric Reliability... Reliability Standards for the Bulk-Power System and determined that the proposed requirements are necessary to...

  10. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.

  11. Reliability analysis of reactor inspection robot(RIROB)

    International Nuclear Information System (INIS)

    Eom, H. S.; Kim, J. H.; Lee, J. C.; Choi, Y. R.; Moon, S. S.

    2002-05-01

    This report describes the method and the results of the reliability analysis of RIROB, developed at the Korea Atomic Energy Research Institute. There are many classic techniques and models for reliability analysis. These techniques and models have been used widely and validated in other industries such as the aviation and nuclear industries. Though they have been validated in the field, they are still insufficient for complicated systems such as RIROB, which are composed of computers, networks, electronic parts, mechanical parts, and software. In particular, the application of these analysis techniques to the digital and software parts of complicated systems is immature at this time; thus expert judgement plays an important role in evaluating the reliability of such systems at present. In this report we propose a method which combines diverse evidence relevant to reliability in order to evaluate the reliability of complicated systems such as RIROB. The proposed method combines diverse evidence and performs inference in a formal and quantitative way by using the benefits of Bayesian Belief Nets (BBN).

  12. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    Directory of Open Access Journals (Sweden)

    Xuyong Chen

    2017-01-01

    Full Text Available Due to many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes it difficult to solve for the nonprobabilistic reliability index. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve for the upper and lower limits of the nonprobabilistic reliability index and to narrow its range. If the range of the reliability index reduces to an acceptable accuracy, the solution is considered convergent, and the nonprobabilistic reliability index is obtained. The case study indicates that the proposed method avoids an oscillating iteration process, makes the iteration stable and convergent, reduces iteration steps significantly, and improves computational efficiency and precision significantly compared with the traditional nonprobabilistic response surface method. Finally, the nonprobabilistic reliability evaluation process for bridges is illustrated by evaluating the reliability of one PC continuous rigid frame bridge with three spans using the proposed method, which appears to be simpler and more reliable when samples and parameters are lacking in the bridge nonprobabilistic reliability evaluation.
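    For a linear limit state g = R - S with resistance R and load effect S given as intervals, the nonprobabilistic (interval) reliability index has a standard closed form; this sketch illustrates the interval model itself, not the paper's response-surface iteration:

```python
def interval_reliability_index(R, S):
    """Nonprobabilistic reliability index for g = R - S, with R and S
    given as intervals (lo, hi): ratio of the center of g to its
    radius. eta > 1 iff the worst-case resistance still exceeds the
    worst-case load, i.e. the structure is safe for all realisations."""
    Rc, Rr = (R[0] + R[1]) / 2.0, (R[1] - R[0]) / 2.0
    Sc, Sr = (S[0] + S[1]) / 2.0, (S[1] - S[0]) / 2.0
    return (Rc - Sc) / (Rr + Sr)
```

The proposed limit method, as described in the abstract, brackets this index between computable upper and lower bounds and iterates until the bracket is tight enough.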

  13. Specialization Patterns

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh; Lawall, Julia Laetitia; Consel, Charles

    2000-01-01

    Design patterns offer many advantages for software development, but can introduce inefficiency into the final program. Program specialization can eliminate such overheads, but is most effective when targeted by the user to specific bottlenecks. Consequently, we propose that these concepts...... are complementary. Program specialization can optimize programs written using design patterns, and design patterns provide information about the program structure that can guide specialization. Concretely, we propose specialization patterns, which describe how to apply program specialization to optimize uses...... of design patterns. In this paper, we analyze the specialization opportunities provided by specific uses of design patterns. Based on the analysis of each design pattern, we define the associated specialization pattern. These specialization opportunities can be declared using the specialization classes...

  14. Investigating the Intersession Reliability of Dynamic Brain-State Properties.

    Science.gov (United States)

    Smith, Derek M; Zhao, Yrian; Keilholz, Shella D; Schumacher, Eric H

    2018-06-01

    Dynamic functional connectivity metrics have much to offer to the neuroscience of individual differences of cognition. Yet, despite the recent expansion in dynamic connectivity research, limited resources have been devoted to the study of the reliability of these connectivity measures. To address this, resting-state functional magnetic resonance imaging data from 100 Human Connectome Project subjects were compared across 2 scan days. Brain states (i.e., patterns of coactivity across regions) were identified by classifying each time frame using k-means clustering. This was done with and without global signal regression (GSR). Multiple gauges of reliability indicated consistency in the brain-state properties across days, and GSR attenuated the reliability of the brain states. Changes in the brain-state properties across the course of the scan were investigated as well. The results demonstrate that summary metrics describing the clustering of individual time frames have adequate test/retest reliability, and thus, these patterns of brain activation may hold promise for individual-difference research.
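    The frame-classification step is plain k-means over time frames. A toy, dependency-free sketch (deterministic first-k initialisation and invented data, purely for illustration) showing how frames become state labels and how one summary property is derived:

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(frames, k, iters=25):
    """Plain Lloyd's k-means over time frames (feature vectors):
    returns centroids (the 'brain states') and a per-frame state label.
    Deterministic first-k initialisation keeps the toy reproducible."""
    cents = [f[:] for f in frames[:k]]
    labels = [0] * len(frames)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(f, cents[j]))
                  for f in frames]
        for j in range(k):
            members = [f for f, l in zip(frames, labels) if l == j]
            if members:
                cents[j] = [sum(c) / len(members) for c in zip(*members)]
    return cents, labels

def dwell_fraction(labels, state):
    """One summary brain-state property: fraction of frames in a state."""
    return labels.count(state) / len(labels)
```

Test/retest reliability in the study is then a matter of comparing such summary properties (e.g. dwell fractions) between the two scan days.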

  15. 78 FR 27113 - Version 5 Critical Infrastructure Protection Reliability Standards

    Science.gov (United States)

    2013-05-09

    ... approve certain reliability standards proposed by the North American Electric Reliability Corporation... Infrastructure Protection Reliability Standards, 143 FERC ] 61,055 (2013). This errata notice serves to correct P... Commission 18 CFR Part 40 [Docket No. RM13-5-000] Version 5 Critical Infrastructure Protection Reliability...

  16. Flexible, reliable software using patterns and agile development

    CERN Document Server

    Christensen, Henrik B

    2010-01-01

    …This book brings together a careful selection of topics that are relevant, indeed crucial, for developing good quality software with a carefully designed pedagogy that leads the reader through an experience of active learning. The emphasis in the content is on practical goals-how to construct reliable and flexible software systems-covering many topics that every software engineer should have studied. The emphasis in the method is on providing a practical context, hands-on projects, and guidance on process. … The text discusses not only what the end product should be like, but also how to get

  17. Reliability of self-rated tinnitus distress and association with psychological symptom patterns.

    Science.gov (United States)

    Hiller, W; Goebel, G; Rief, W

    1994-05-01

    Psychological complaints were investigated in two samples of 60 and 138 in-patients suffering from chronic tinnitus. We administered the Tinnitus Questionnaire (TQ), a 52-item self-rating scale which differentiates between dimensions of emotional and cognitive distress, intrusiveness, auditory perceptual difficulties, sleep disturbances, and somatic complaints. The test-retest reliability was .94 for the TQ global score and between .86 and .93 for the subscales. Three independent analyses were conducted to estimate the split-half reliability (internal consistency), which was only slightly lower than the test-retest values for scales with a relatively small number of items. Reliability was also sufficient at the level of single items. Low correlations between the TQ and the Hopkins Symptom Checklist (SCL-90-R) indicate a distinct quality of tinnitus-related versus general psychological disturbances.

  18. FUNDAMENTALS OF RELIABILITY OF ELECTRIC POWER SYSTEM AND EQUIPMENT

    OpenAIRE

    Engr. Anumaka; Michael Chukwukadibia

    2011-01-01

    Today, the electric power system consists of complex interconnected networks which are prone to different problems that militate against the reliability of the power system. Inadequate reliability in the power system causes problems such as high failure rates of power system installations and consumer equipment, transient and intransient faults, symmetrical faults, etc. This paper provides an extensive review of power system and equipment reliability and related failure patterns in equipment.

  19. A Hierarchical Energy Efficient Reliable Transport Protocol for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Prabhudutta Mohanty

    2014-12-01

    Full Text Available The two important requirements for many Wireless Sensor Networks (WSNs) are prolonged network lifetime and end-to-end reliability. The sensor nodes consume more energy during data transmission than during data sensing. In a WSN, redundant data increase energy consumption and latency and reduce reliability during data transmission. Therefore, it is important to support energy-efficient, reliable data transport in WSNs. In this paper, we present a Hierarchical Energy Efficient Reliable Transport Protocol (HEERTP) for data transmission within the WSN. This protocol maximises the network lifetime by controlling redundant data transmission with the co-ordination of the Base Station (BS). The proposed protocol also achieves end-to-end reliability using a hop-by-hop acknowledgement scheme. We evaluate the performance of the proposed protocol through simulation. The simulation results reveal that the proposed protocol achieves better performance in terms of energy efficiency, latency, and reliability than existing protocols.
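    The reliability gain from hop-by-hop acknowledgement can be illustrated with elementary probability (illustrative formulas under an independent-loss assumption, not HEERTP's actual mechanism):

```python
def hop_success(p, retries):
    """Probability a packet crosses one hop, given per-attempt success
    probability p and up to `retries` transmissions (local ACK/retry)."""
    return 1.0 - (1.0 - p) ** retries

def path_success(p, hops, retries):
    """End-to-end delivery probability over `hops` links when every
    hop acknowledges and retransmits locally."""
    return hop_success(p, retries) ** hops
```

Local retransmission converts raw per-hop losses into a tunable per-hop success probability, which is why hop-by-hop ACK schemes degrade far more slowly than end-to-end recovery as the hop count grows.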

  20. Reliability of poly 3,4-ethylenedioxythiophene strain gauge

    DEFF Research Database (Denmark)

    Mateiu, Ramona Valentina; Lillemose, Michael; Hansen, Thomas Steen

    2007-01-01

    We report on the experimentally observed reliability of the piezoresistive effect in strained poly 3,4-ethylenedioxythiophene (PEDT). PEDT is an intrinsic conductive polymer which can be patterned by conventional cleanroom processing, and thus presents a promising material for all-polymer microsystems. The measurements are made on microfabricated test chips with PEDT resistors patterned by conventional UV-lithography and reactive ion etching (RIE). We determine a gauge factor of 3.41 ± 0.42 for the strained PEDT and we see an increase in resistivity from 1.98 · 10⁻⁴ Ω m to 2.22 · 10⁻⁴ Ω m when…

  1. A study on the enhancement of the reliability in gravure offset roll printing with blanket swelling control

    International Nuclear Information System (INIS)

    Kim, Ga Eul; Woo, Kyoohee; Kang, Dongwoo; Jang, Yunseok; Lee, Taik-Min; Kwon, Sin; Choi, Young-Man; Lee, Moon G

    2016-01-01

    In roll-offset printing (patterning) technology with a PDMS blanket as the transfer medium, one of the major reliability issues is the occurrence of swelling, i.e. absorption of the ink solvent into the printing blanket with repeated printing. This study developed a method to resolve blanket swelling in gravure offset roll printing and performed experiments for performance verification. The physical phenomena of mass and heat transfer were applied to fabricate a device based on convection drying. The proposed device effectively controlled blanket swelling through drying by blowing air and additional temperature control. The experiments verified that printing quality (in particular the variation of the width of printed patterns) was maintained over 500 continuous printing cycles. (paper)

  2. Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots

    Directory of Open Access Journals (Sweden)

    Xu Zhong

    2017-02-01

    Full Text Available This article presents a self-localization scheme for indoor mobile robot navigation based on reliable design and recognition of artificial visual landmarks. Each landmark is patterned with a set of concentric circular rings in black and white, which reliably encodes the landmark’s identity under environmental illumination. A mobile robot in navigation uses an onboard camera to capture landmarks in the environment. The landmarks in an image are detected and identified using a bilayer recognition algorithm: A global recognition process initially extracts candidate landmark regions across the whole image and tries to identify enough landmarks; if necessary, a local recognition process locally enhances those unidentified regions of interest influenced by illumination and incompleteness and reidentifies them. The recognized landmarks are used to estimate the position and orientation of the onboard camera in the environment, based on the geometric relationship between the image and environmental frames. The experiments carried out in a real indoor environment show high robustness of the proposed landmark design and recognition scheme to the illumination condition, which leads to reliable and accurate mobile robot localization.

  3. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples), grouped by self-report, other report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  4. Optimization of Reliability Centered Maintenance Based on Maintenance Costs and Reliability with Consideration of Location of Components

    Directory of Open Access Journals (Sweden)

    Mahdi Karbasian

    2011-03-01

    Full Text Available The reliability of systems such as electrical and electronic circuits, power generation/distribution networks, and mechanical systems, in which the failure of a component may cause failure of the whole system, and even of cellular manufacturing systems whose machines are connected in series, is critically important. So far, approaches for improving the reliability of these systems have been based mainly on enhancing the inherent reliability of each system component or on increasing system reliability through maintenance strategies. In some of the literature, only the influence of the location of a system's components on reliability is studied; other approaches have rarely been applied. In this paper, a multi-criteria model is proposed to strike a balance among a system's reliability, location costs, and its maintenance. Finally, a numerical example is presented and solved with the Lingo software.

  5. Reliability of thermal interface materials: A review

    International Nuclear Information System (INIS)

    Due, Jens; Robinson, Anthony J.

    2013-01-01

    Thermal interface materials (TIMs) are used extensively to improve thermal conduction across two mating parts. They are particularly crucial in electronics thermal management since excessive junction-to-ambient thermal resistances can cause elevated temperatures which can negatively influence device performance and reliability. Of particular interest to electronic package designers is the thermal resistance of the TIM layer at the end of its design life. Estimations of this allow the package to be designed to perform adequately over its entire useful life. To this end, TIM reliability studies have been performed using accelerated stress tests. This paper reviews the body of work which has been performed on TIM reliability. It focuses on the various test methodologies with commentary on the results which have been obtained for the different TIM materials. Based on the information available in the open literature, a test procedure is proposed for TIM selection based on beginning- and end-of-life performance. - Highlights: ► This paper reviews the body of work which has been performed on TIM reliability. ► Test methodologies for reliability testing are outlined. ► Reliability results for the different TIM materials are discussed. ► A test procedure is proposed for TIM selection based on beginning-of-life and end-of-life performance.
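    The quantity tracked in such studies, the conduction resistance of the TIM layer, follows the standard bond-line-thickness relation (a textbook formula, not specific to this review); end-of-life degradation shows up as growth in bond-line thickness, drop in conductivity, or added contact resistance:

```python
def tim_resistance(blt_m, k_w_mk, area_m2, r_contact=0.0):
    """Thermal resistance of a TIM layer in K/W: bond-line thickness
    divided by (thermal conductivity x contact area), plus any lumped
    contact resistance at the two interfaces."""
    return blt_m / (k_w_mk * area_m2) + r_contact

# Illustrative numbers: 100 um bond line, k = 4 W/(m K), 1 cm^2 pad.
r_bol = tim_resistance(100e-6, 4.0, 1e-4)
```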

  6. Reliability analysis of reactor pressure vessel intensity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs a reliability analysis of the reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo simulation method, Latin hypercube sampling, central composite design and Box-Behnken matrix design. The RPV integrity reliability under the given input conditions is evaluated. The results show that the factors affecting the reliability of the RPV base material are, in descending order, internal pressure, allowable basic stress and elasticity modulus of the base material, while the factors affecting bolt reliability are, in descending order, allowable basic stress of the bolt material, bolt preload and internal pressure. (authors)

  7. Fuzzy QFD for supply chain management with reliability consideration

    International Nuclear Information System (INIS)

    Sohn, So Young; Choi, In Su

    2001-01-01

    Although many products are made through several tiers of supply chains, a systematic way of handling reliability issues at the various product planning stages has drawn attention only recently, in the context of supply chain management (SCM). The main objective of this paper is to develop a fuzzy quality function deployment (QFD) model in order to convey the fuzzy relationships between customers' needs and design specifications for reliability in the context of SCM. A fuzzy multi-criteria decision-making procedure is proposed and applied to find a set of optimal solutions with respect to the performance of the reliability tests needed in CRT design. It is expected that the proposed approach can make significant contributions in the following areas: effectively communicating with technical personnel and users; developing a relatively error-free reliability review system; and creating consistent and complete documentation for design for reliability

  8. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Full Text Available Reliability analysis for industrial maintenance is now increasingly demanded by industry worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems that generate large volumes of data, which can be used to inform future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or unreliable. In this context, to analyze the reliability of an oil pump, this work proposes to examine and treat the incomplete, incorrect or aberrant data used in the reliability modeling of the pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.
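The regression-based replacement of incomplete records can be sketched as a simple least-squares imputation (the paper's actual covariates and regression model are not specified here; this is a generic illustration on a hypothetical sensor series):

```python
def impute_by_regression(x, y):
    """Replace None entries in y with predictions from an ordinary
    least-squares line fitted on the complete (x, y) pairs."""
    pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(pairs)
    mx = sum(xi for xi, _ in pairs) / n
    my = sum(yi for _, yi in pairs) / n
    sxx = sum((xi - mx) ** 2 for xi, _ in pairs)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in pairs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return [yi if yi is not None else slope * xi + intercept
            for xi, yi in zip(x, y)]

# Hypothetical pump measurement series with a missing reading at x = 3
filled = impute_by_regression([1, 2, 3, 4], [2.0, 4.0, None, 8.0])
```

The fitted line replaces only the missing entries; the observed values are passed through unchanged, so downstream reliability fitting sees a complete series.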

  9. Fuzzy QFD for supply chain management with reliability consideration

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, So Young; Choi, In Su

    2001-06-01

    Although many products are made through several tiers of supply chains, a systematic way of handling reliability issues at the various product planning stages has drawn attention only recently, in the context of supply chain management (SCM). The main objective of this paper is to develop a fuzzy quality function deployment (QFD) model in order to convey the fuzzy relationships between customers' needs and design specifications for reliability in the context of SCM. A fuzzy multi-criteria decision-making procedure is proposed and applied to find a set of optimal solutions with respect to the performance of the reliability tests needed in CRT design. It is expected that the proposed approach can make significant contributions in the following areas: effectively communicating with technical personnel and users; developing a relatively error-free reliability review system; and creating consistent and complete documentation for design for reliability.

  10. Reliability analysis for new technology-based transmitters

    Energy Technology Data Exchange (ETDEWEB)

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and STMR UMR CNRS 6279, 12 rue Marie Curie, BP 2060, 10010 Troyes cedex (France); Charpentier, Dominique [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France)

    2011-02-15

    The reliability analysis of new technology-based transmitters has to deal with specific issues: various interactions between both material elements and functions, undefined behaviours under faulty conditions, several transmitted data, and little reliability feedback. To handle these particularities, a '3-step' model is proposed, based on goal tree-success tree (GTST) approaches to represent both the functional and material aspects, and includes the faults and failures as a third part for supporting reliability analyses. The behavioural aspects are provided by relationship matrices, also denoted master logic diagrams (MLD), with stochastic values which represent direct relationships between system elements. Relationship analyses are then proposed to assess the effect of any fault or failure on any material element or function. Taking these relationships into account, the probabilities of malfunction and failure modes are evaluated according to time. Furthermore, uncertainty analyses tend to show that even if the input data and system behaviour are not well known, these previous results can be obtained in a relatively precise way. An illustration is provided by a case study on an infrared gas transmitter. These properties make the proposed model and corresponding reliability analyses especially suitable for intelligent transmitters (or 'smart sensors').

  11. Chip-Level Electromigration Reliability for Cu Interconnects

    International Nuclear Information System (INIS)

    Gall, M.; Oh, C.; Grinshpon, A.; Zolotov, V.; Panda, R.; Demircan, E.; Mueller, J.; Justison, P.; Ramakrishna, K.; Thrasher, S.; Hernandez, R.; Herrick, M.; Fox, R.; Boeck, B.; Kawasaki, H.; Haznedar, H.; Ku, P.

    2004-01-01

    Even after the successful introduction of Cu-based metallization, the electromigration (EM) failure risk has remained one of the most important reliability concerns for most advanced process technologies. Ever increasing operating current densities and the introduction of low-k materials in the backend process scheme are some of the issues that threaten reliable, long-term operation at elevated temperatures. The traditional method of verifying EM reliability only through current density limit checks is proving to be inadequate in general, or quite expensive at best. A Statistical EM Budgeting (SEB) methodology has been proposed to assess more realistic chip-level EM reliability from the complex statistical distribution of currents in a chip. To be valuable, this approach requires accurate estimation of currents for all interconnect segments in a chip. However, no efficient technique to manage the complexity of such a task for very large chip designs is known. We present an efficient method to estimate currents exhaustively for all interconnects in a chip. The proposed method uses pre-characterization of cells and macros, and steps to identify and filter out symmetrically bi-directional interconnects. We illustrate the strength of the proposed approach using a high-performance microprocessor design for embedded applications as a case study.
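Per-segment EM lifetime in this kind of budgeting flow is conventionally anchored in Black's equation. A hedged sketch (the prefactor, current exponent and activation energy below are illustrative placeholders, not values from the paper):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def em_mttf(j_a_per_cm2, temp_k, a_const=1.0e3, n=2.0, ea_ev=0.9):
    """Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).
    A, n and Ea are process-dependent; the defaults here are illustrative."""
    return a_const * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# With n = 2, doubling the current density at fixed temperature
# reduces the median lifetime by a factor of 2**n = 4
ratio = em_mttf(1e6, 378.0) / em_mttf(2e6, 378.0)
```

This current-density sensitivity is what makes accurate, exhaustive per-interconnect current estimation the limiting step of a statistical EM budget.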

  12. A Simple and Reliable Health Monitoring System For Shoulder Health: Proposal

    Science.gov (United States)

    Lee, Yann-Long

    2014-01-01

    Background The current health care system is complex and inefficient. A simple and reliable health monitoring system that can help patients perform medical self-diagnosis is seldom readily available. Because the medical system is vast and complex, it has hampered or delayed patients in seeking medical advice or treatment in a timely manner, which may affect the patient’s chances of recovery, especially for those with severe illnesses such as cancer and heart disease. Objective The purpose of this paper is to propose a methodology for designing a simple, low-cost, Internet-based health-screening platform. Methods This health-screening platform will enable patients to perform medical self-diagnosis over the Internet. Historical data have shown the importance of early detection in ensuring that patients receive proper treatment and recover quickly. Results The platform is designed with special emphasis on the user interface. A standard Web-based user-interface design is adopted so that the user feels at ease operating in a familiar Web environment. In addition, graphics such as charts and graphs are used generously to help users visualize and understand the diagnostic results. The system is developed using the hypertext preprocessor (PHP) programming language. One important feature of this system platform is that it is built as a stand-alone platform, which tends to provide better user privacy and security. The prototype system platform was developed by the National Cheng Kung University Ergonomic and Design Laboratory. Conclusions The completed prototype of this system platform was submitted to the Taiwan Medical Institute for evaluation. The evaluation by 120 participants showed that this platform system is a highly effective tool for health-screening applications and has great potential for improving the quality of medical care for the general public. PMID:24571980

  13. A simple and reliable health monitoring system for shoulder health: proposal.

    Science.gov (United States)

    Liu, Shuo-Fang; Lee, Yann-Long

    2014-02-26

    The current health care system is complex and inefficient. A simple and reliable health monitoring system that can help patients perform medical self-diagnosis is seldom readily available. Because the medical system is vast and complex, it has hampered or delayed patients in seeking medical advice or treatment in a timely manner, which may affect the patient's chances of recovery, especially for those with severe illnesses such as cancer and heart disease. The purpose of this paper is to propose a methodology for designing a simple, low-cost, Internet-based health-screening platform. This health-screening platform will enable patients to perform medical self-diagnosis over the Internet. Historical data have shown the importance of early detection in ensuring that patients receive proper treatment and recover quickly. The platform is designed with special emphasis on the user interface. A standard Web-based user-interface design is adopted so that the user feels at ease operating in a familiar Web environment. In addition, graphics such as charts and graphs are used generously to help users visualize and understand the diagnostic results. The system is developed using the hypertext preprocessor (PHP) programming language. One important feature of this system platform is that it is built as a stand-alone platform, which tends to provide better user privacy and security. The prototype system platform was developed by the National Cheng Kung University Ergonomic and Design Laboratory. The completed prototype of this system platform was submitted to the Taiwan Medical Institute for evaluation. The evaluation by 120 participants showed that this platform system is a highly effective tool for health-screening applications and has great potential for improving the quality of medical care for the general public.

  14. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually entails a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of searching for the most probable failure point (MPP) of inverse reliability, and the MPP search for multidisciplinary inverse reliability is then performed within the framework of CLA-CO. This method improves the MPP search through two elements: one is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations of the subsystem responses as replacements for the consistency equality constraints in the system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline and achieves higher computational efficiency. Additionally, there is no difficulty in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
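The inverse-reliability MPP search at the heart of such methods can be illustrated with a generic AMV-type fixed-point update in standard normal space (a simplified single-discipline stand-in, not the paper's CLA-CO procedure; the limit state below is hypothetical):

```python
import math

def inverse_mpp(grad_g, n_dim, beta_target, iters=50):
    """AMV-type fixed-point search in standard normal space for the most
    probable point (MPP) at a prescribed reliability index:
    u <- -beta * grad_g(u) / ||grad_g(u)||."""
    u = [0.0] * n_dim
    for _ in range(iters):
        g = grad_g(u)
        norm = math.sqrt(sum(gi * gi for gi in g))
        u = [-beta_target * gi / norm for gi in g]
    return u

# Hypothetical limit state g(u) = u1 + u2**2 with gradient (1, 2*u2);
# search for the MPP at target reliability index beta = 3
mpp = inverse_mpp(lambda u: (1.0, 2.0 * u[1]), n_dim=2, beta_target=3.0)
```

By construction the returned point lies on the sphere of radius beta, which is the defining property of the inverse-reliability MPP.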

  15. Reliability-oriented energy storage sizing in wind power systems

    DEFF Research Database (Denmark)

    Qin, Zian; Liserre, Marco; Blaabjerg, Frede

    2014-01-01

    Energy storage can be used to suppress the power fluctuations in wind power systems, and thereby reduce the thermal excursion and improve the reliability. Since the cost of energy storage in large power applications is high, it is crucial to have a better understanding of the relationship between the size of the energy storage and the reliability benefit it can generate. Therefore, a reliability-oriented energy storage sizing approach is proposed for wind power systems, where the power, energy, cost and control strategy of the energy storage are all taken into account. With the proposed approach, the computational effort is reduced and the impact of the energy storage system on the reliability of the wind power converter can be quantified.
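A common sizing heuristic along these lines smooths the wind profile and derives the storage's power and energy ratings from the residual flow (a simplified sketch under assumed units of hours and per-unit power; the paper's method additionally models converter thermal loading and cost):

```python
def storage_requirements(wind_power, window, dt_h=1.0):
    """Smooth wind power with a trailing moving average; the storage absorbs
    the difference. Returns (peak power, energy capacity) the storage needs."""
    soc = peak_p = max_soc = min_soc = 0.0
    for i, p in enumerate(wind_power):
        avg = sum(wind_power[max(0, i - window + 1):i + 1]) / min(i + 1, window)
        flow = p - avg                  # + charging, - discharging
        soc += flow * dt_h              # running state of charge (energy)
        peak_p = max(peak_p, abs(flow))
        max_soc, min_soc = max(max_soc, soc), min(min_soc, soc)
    return peak_p, max_soc - min_soc

# Two-sample toy profile: the storage must absorb the step change
peak, capacity = storage_requirements([1.0, 3.0], window=2)
```

The swing between the highest and lowest state of charge fixes the energy rating, while the largest instantaneous mismatch fixes the power rating.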

  16. Operator reliability analysis during NPP small break LOCA

    International Nuclear Information System (INIS)

    Zhang Jiong; Chen Shenglin

    1990-01-01

    To assess the human factors characteristics of a NPP main control room (MCR) design, the MCR operator reliability during a small break LOCA is analyzed, and some approaches for improving MCR operator reliability are proposed based on the analysis results

  17. Designing reliable supply chain network with disruption risk

    Directory of Open Access Journals (Sweden)

    Ali Bozorgi Amiri

    2013-01-01

    Full Text Available Although supply chain disruptions rarely occur, their negative effects are prolonged and severe. In this paper, we propose a reliable capacitated supply chain network design (RSCND) model that considers random disruptions at both distribution centers and suppliers. The proposed model determines the optimal locations of the distribution centers (DCs) with the highest reliability, the best plan for assigning customers to the opened DCs, and the assignment of the opened DCs to suitable suppliers with the lowest transportation cost. In this study, random disruption affects the location and capacity of the distribution centers (DCs) and suppliers. It is assumed that a disrupted DC and a disrupted supplier may lose a portion of their capacity, and that the rest of a disrupted DC's demand can be supplied by other DCs. In addition, we consider shortages in DCs, which can occur under either normal or disruption conditions, in which case DCs can support each other. Unlike other studies in the existing literature, we use a new approach to model the reliability of DCs: we consider a range of reliability instead of using binary variables. In order to solve the proposed model for real-world instances, a Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is applied. Preliminary results of testing the proposed model on several problems of different sizes appear promising.

  18. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    A number of software reliability models have been developed to estimate and predict software reliability. However, there are no established standard models to quantify it. Most models estimate the quality of software in reliability figures such as remaining faults, failure rate, or mean time to next failure at the testing phase, and consider these the ultimate indicators of software reliability. Experience shows that there is a large gap between reliability predicted during development and reliability measured during operation, which means that predicted reliability, or so-called test reliability, is not operational reliability. Customers prefer operational reliability to test reliability. In this study, we propose a method that predicts operational reliability rather than test reliability by introducing a testing environment factor that quantifies the changes in environment

  19. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    Science.gov (United States)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    A company producing refined sugar faces a problem on its production floor: the required level of critical machine availability has not been reached, because the machines frequently break down, causing sudden losses of production time and production opportunities. This problem can be addressed with the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the pattern of the underlying distribution. The method provides values for the reliability, failure rate and availability of a machine over the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data show a lognormal distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component, while tests on the mean time to repair (MTTR) show an exponential distribution for the flexible hose and a Weibull distribution for the teflon cone lifting component. On its actual replacement schedule of every 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960, while on its actual replacement schedule of every 1944 hours, the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
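Figures of this kind follow from the standard survival and steady-state availability formulas; for instance, with a Weibull time-to-failure model (the parameter values below are illustrative, not the fitted ones from the study):

```python
import math

def weibull_reliability(t, shape, scale):
    """R(t) = exp(-(t/scale)**shape) for a Weibull time-to-failure model."""
    return math.exp(-((t / scale) ** shape))

def steady_state_availability(mttf, mttr):
    """A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# Illustrative component: shape = 1 reduces Weibull to the exponential model
r_720h = weibull_reliability(720.0, shape=1.0, scale=2000.0)
avail = steady_state_availability(mttf=2000.0, mttr=8.0)
```

Note how a component can combine low mission reliability over a long replacement interval with high availability, exactly the pattern reported for both components above, because repairs are short relative to the time between failures.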

  20. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, a reliability architecture for the crane system is proposed, and the corresponding reliability mathematical model is constructed.
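A reliability mathematical model for a structure of this kind typically composes series and active-parallel blocks. A minimal sketch (the block layout and reliability values are assumed for illustration, not taken from the paper):

```python
def series(reliabilities):
    """Series system: every block must work, so reliabilities multiply."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(reliabilities):
    """Active-parallel redundancy: the system fails only if all blocks fail."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

# Hypothetical crane architecture: two hoisting drives in parallel,
# in series with the bridge structure and the control system
r_system = series([parallel([0.95, 0.95]), 0.999, 0.99])
```

The parallel pair lifts the weakest link from 0.95 to 0.9975, after which the series blocks dominate the system figure; this is the basic trade captured by such models.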

  1. Reliability of a four-column classification for tibial plateau fractures.

    Science.gov (United States)

    Martínez-Rondanelli, Alfredo; Escobar-González, Sara Sofía; Henao-Alzate, Alejandro; Martínez-Cano, Juan Pablo

    2017-09-01

    A four-column classification system offers a different way of evaluating tibial plateau fractures. The aim of this study is to compare the intra-observer and inter-observer reliability of the four-column and classic classifications. This is a reliability study, which included patients presenting with tibial plateau fractures between January 2013 and September 2015 in a level-1 trauma centre. Four orthopaedic surgeons blindly classified each fracture according to four different classifications: AO, Schatzker, Duparc and four-column. Kappa and intra-observer and inter-observer concordance were calculated for the reliability analysis. Forty-nine patients were included. The mean age was 39 ± 14.2 years, with no gender predominance (men: 51%; women: 49%), and 67% of the fractures included at least one of the posterior columns. The intra-observer and inter-observer concordances were calculated for each classification: four-column (84%/79%), Schatzker (60%/71%), AO (50%/59%) and Duparc (48%/58%), with statistically significant differences among them (p = 0.001/p = 0.003). Kappa coefficients for intra-observer and inter-observer evaluations: Schatzker 0.48/0.39, four-column 0.61/0.34, Duparc 0.37/0.23, and AO 0.34/0.11. The proposed four-column classification showed the highest intra- and inter-observer agreement. When taking into account the agreement that occurs by chance, the Schatzker classification showed the highest inter-observer kappa, but again the four-column had the highest intra-observer kappa value. The proposed classification is a more inclusive classification for the posteromedial and posterolateral fractures. We suggest, therefore, that it be used in addition to one of the classic classifications in order to better understand the fracture pattern, as it allows more attention to be paid to the posterior columns, improves surgical planning and allows the surgical approach to be chosen more accurately.

  2. 78 FR 76986 - Version 5 Critical Infrastructure Protection Reliability Standards

    Science.gov (United States)

    2013-12-20

    ...; Order No. 791] Version 5 Critical Infrastructure Protection Reliability Standards AGENCY: Federal Energy... 72755). The regulations approved certain reliability standards proposed by the North American Electric... Infrastructure Protection Reliability Standards, 145 FERC ¶ 61,160 (2013). This errata notice serves to correct P...

  3. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop

  4. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  5. Power system reliability in supplying nuclear reactors

    International Nuclear Information System (INIS)

    Gad, M.M.M.

    2007-01-01

    This thesis presents a simple technique for deducing the minimal cut sets (MCS) from the defined minimal path sets (MPS) of a generic distribution system, and this technique has been used to evaluate the basic reliability indices of the electrical distribution network of Egypt's second research reactor (ETRR-2). Alternative system configurations are then studied to evaluate their impact on service reliability. The proposed MCS approach considers both sustained and temporary outages. Temporary outages constitute an important parameter in characterizing the reliability indices of critical load points in a distribution system. The power quality impact on the reliability indices is also considered

  6. Connectivity-Based Reliable Multicast MAC Protocol for IEEE 802.11 Wireless LANs

    Directory of Open Access Journals (Sweden)

    Woo-Yong Choi

    2009-01-01

    Full Text Available We propose an efficient reliable multicast MAC protocol based on the connectivity information among the recipients. Enhancing the BMMM (Batch Mode Multicast MAC) protocol, the proposed reliable multicast MAC protocol significantly reduces the number of RAK (Request for ACK) frame transmissions in a reasonable computational time and enhances the MAC performance. Through an analytical performance analysis, the throughputs of the BMMM protocol and our proposed MAC protocol are derived. Numerical examples show that our proposed MAC protocol increases the reliable multicast MAC performance for IEEE 802.11 wireless LANs.

  7. Generation reliability assessment in oligopoly power market using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Haroonabadi, H.; Haghifam, M.R.

    2007-01-01

    This paper addresses issues regarding power generation reliability assessment (HLI) in deregulated power pool markets. Most HLI reliability evaluation methods are based on the loss of load expectation (LOLE) approach, which is among the most suitable indices for describing the level of generation reliability. LOLE refers to the expected time in which load is greater than the amount of available generation. While most reliability assessments deal only with power system constraints, this study considers HLI reliability assessment in an oligopoly power market using Monte Carlo simulation (MCS). It evaluates the sensitivity of the reliability index to different reserve margins and load levels. The reliability index is determined by intersecting the offer and demand curves of the power plants and comparing them with other parameters. The paper describes the fundamentals of an oligopoly power pool market and proposes an algorithm for HLI reliability assessment in such a market. The proposed method was assessed on the IEEE Reliability Test System with satisfactory results. In all cases, generation reliability indices were evaluated with different reserve margins and various load levels. 19 refs., 7 figs., 1 appendix
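In its simplest form, the LOLE-style index reduces to a loss-of-load probability estimated by sampling unit outages, as sketched below (a generic HLI Monte Carlo illustration that ignores the market bidding layer described in the paper; the unit fleet is hypothetical):

```python
import random

def lolp_mcs(unit_caps, unit_fors, load, n_trials=20000, seed=1):
    """Loss-of-load probability by Monte Carlo: each unit is available with
    probability (1 - forced outage rate); count trials where the total
    available capacity falls short of the load."""
    rng = random.Random(seed)
    short = sum(
        1
        for _ in range(n_trials)
        if sum(c for c, q in zip(unit_caps, unit_fors) if rng.random() >= q) < load
    )
    return short / n_trials

# Three units: two 100 MW (FOR 5%) and one 50 MW (FOR 10%), load 150 MW
lolp = lolp_mcs([100, 100, 50], [0.05, 0.05, 0.10], load=150)
lole_hours = lolp * 8760  # annualized expected hours of shortfall
```

For this small fleet the exact value is 0.012 (shortfall requires at least one large unit down together with the small unit, or both large units down), so the sampled estimate can be checked directly.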

  8. Finite element reliability analysis of fatigue life

    International Nuclear Information System (INIS)

    Harkness, H.H.; Belytschko, T.; Liu, W.K.

    1992-01-01

    Fatigue reliability is addressed by the first-order reliability method combined with a finite element method. Two-dimensional finite element models of components with cracks in mode I are considered with crack growth treated by the Paris law. Probability density functions of the variables affecting fatigue are proposed to reflect a setting where nondestructive evaluation is used, and the Rosenblatt transformation is employed to treat non-Gaussian random variables. Comparisons of the first-order reliability results and Monte Carlo simulations suggest that the accuracy of the first-order reliability method is quite good in this setting. Results show that the upper portion of the initial crack length probability density function is crucial to reliability, which suggests that if nondestructive evaluation is used, the probability of detection curve plays a key role in reliability. (orig.)
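For mode I growth under the Paris law with a range-independent geometry factor, the cycles-to-failure integral has a closed form, which makes Monte Carlo cross-checks on the initial-crack-size distribution cheap (the material constants and distribution parameters below are illustrative, not those of the study):

```python
import math
import random

def paris_life(a0, ac, c, m, dsigma, y=1.0):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C * (dK)**m with dK = Y * dsigma * sqrt(pi * a)  (valid for m != 2)."""
    k = c * (y * dsigma * math.sqrt(math.pi)) ** m * (1.0 - m / 2.0)
    return (ac ** (1.0 - m / 2.0) - a0 ** (1.0 - m / 2.0)) / k

def fatigue_failure_prob(n_design, n_trials=5000, seed=7):
    """P(fatigue life < design life) with a lognormal initial crack size
    standing in for the crack-length density (illustrative parameters)."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n_trials)
        if paris_life(rng.lognormvariate(math.log(1e-3), 0.3), 0.02,
                      c=1e-11, m=3.0, dsigma=100.0) < n_design
    )
    return fails / n_trials

life = paris_life(a0=1e-3, ac=0.02, c=1e-11, m=3.0, dsigma=100.0)  # cycles
p_fail = fatigue_failure_prob(n_design=5e5)
```

Because life is monotone decreasing in the initial crack size, only the upper tail of the initial-size density drives the failure probability, which is the sensitivity the abstract attributes to the probability-of-detection curve.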

  9. Inference on the reliability of Weibull distribution with multiply Type-I censored data

    International Nuclear Information System (INIS)

    Jia, Xiang; Wang, Dong; Jiang, Ping; Guo, Bo

    2016-01-01

    In this paper, we focus on the reliability of Weibull distribution under multiply Type-I censoring, which is a general form of Type-I censoring. In multiply Type-I censoring in this study, all units in the life testing experiment are terminated at different times. Reliability estimation with the maximum likelihood estimate of Weibull parameters is conducted. With the delta method and Fisher information, we propose a confidence interval for reliability and compare it with the bias-corrected and accelerated bootstrap confidence interval. Furthermore, a scenario involving a few expert judgments of reliability is considered. A method is developed to generate extended estimations of reliability according to the original judgments and transform them to estimations of Weibull parameters. With Bayes theory and the Monte Carlo Markov Chain method, a posterior sample is obtained to compute the Bayes estimate and credible interval for reliability. Monte Carlo simulation demonstrates that the proposed confidence interval outperforms the bootstrap one. The Bayes estimate and credible interval for reliability are both satisfactory. Finally, a real example is analyzed to illustrate the application of the proposed methods. - Highlights: • We focus on reliability of Weibull distribution under multiply Type-I censoring. • The proposed confidence interval for the reliability is superior after comparison. • The Bayes estimates with a few expert judgements on reliability are satisfactory. • We specify the cases where the MLEs do not exist and present methods to remedy it. • The distribution of estimate of reliability should be used for accurate estimate.
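The delta-method interval mentioned above follows directly from the partial derivatives of R(t) with respect to the Weibull shape and scale (the covariance matrix below is a placeholder; in practice it would come from the inverse Fisher information of the censored-data likelihood):

```python
import math

def weibull_reliability_ci(t, shape, scale, cov, z=1.96):
    """Delta-method confidence interval for R(t) = exp(-(t/scale)**shape).
    cov is the 2x2 covariance matrix of the MLEs (shape, scale)."""
    x = t / scale
    r = math.exp(-(x ** shape))
    # Partial derivatives of R(t) with respect to shape and scale
    dr_dshape = -r * x ** shape * math.log(x)
    dr_dscale = r * shape * x ** shape / scale
    var = (dr_dshape ** 2 * cov[0][0] + dr_dscale ** 2 * cov[1][1]
           + 2.0 * dr_dshape * dr_dscale * cov[0][1])
    half = z * math.sqrt(var)
    return r, max(0.0, r - half), min(1.0, r + half)

# Hypothetical MLEs and covariance from a censored life test
r_hat, lo, hi = weibull_reliability_ci(100.0, 2.0, 500.0,
                                       [[0.01, 0.0], [0.0, 100.0]])
```

Clamping the endpoints to [0, 1] is a pragmatic fix for the normal approximation near the boundary; the bootstrap interval compared in the paper avoids this issue by construction.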

  10. Test Reliability at the Individual Level

    Science.gov (United States)

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
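The parallel-tests approach can be sketched for a single person by splitting each day's items into two parallel half-scores, correlating the halves across the repeated days, and stepping up with the Spearman-Brown formula (a simplified stand-in for the authors' procedure, which also includes a structural equation modeling variant):

```python
def person_reliability(half_a, half_b):
    """Parallel-tests reliability for one person: the across-occasion
    correlation of two parallel half-scores, stepped up to full length."""
    n = len(half_a)
    ma = sum(half_a) / n
    mb = sum(half_b) / n
    sab = sum((x - ma) * (y - mb) for x, y in zip(half_a, half_b))
    saa = sum((x - ma) ** 2 for x in half_a)
    sbb = sum((y - mb) ** 2 for y in half_b)
    r_half = sab / (saa * sbb) ** 0.5
    return 2.0 * r_half / (1.0 + r_half)  # Spearman-Brown correction

# Hypothetical daily half-scores for one participant over four days
rel = person_reliability([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Because the calculation uses only one person's repeated measures, it yields a separate coefficient per individual, which is exactly what allows reliability to vary from person to person.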

  11. 78 FR 72755 - Version 5 Critical Infrastructure Protection Reliability Standards

    Science.gov (United States)

    2013-12-03

    ... impact on Bulk-Power System reliability. However, the Commission is concerned that the proposed language... focus on the reliability and security of the Bulk Power System.'' \\26\\ Accordingly, NERC requests that...-002-5 through CIP-011-1, submitted by the North American Electric Reliability Corporation (NERC), the...

  12. Optimization of reliability centered predictive maintenance scheme for inertial navigation system

    International Nuclear Information System (INIS)

    Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong

    2015-01-01

    The goal of this study is to propose a reliability centered predictive maintenance scheme for a complex-structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model, the GO chart. Components' Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of the components' lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design in the INS, maintenance time is based not only on components' RUL, but also (and mainly) on the timing of when system reliability fails to meet the set threshold. The definition of components' maintenance priority balances three factors: a component's importance to the system, its risk degree, and its detection difficulty. A Maintenance Priority Number (MPN) is introduced, which may provide quantitative maintenance priority results for all components. A maintenance unit time cost model is built based on components' MPN, the components' RUL predictive model and maintenance intervals for the optimization of the maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove the proposed predictive maintenance scheme is feasible and effective. - Highlights: • A dynamic PdM with a rolling horizon is proposed for INS with redundant components. • GO Methodology is applied to build the system reliability analysis model. • A concept of MPN is proposed to quantify the maintenance sequence of components. • An optimization model is built to select the optimal group of maintenance components. • The optimization goal is minimizing the cost of maintaining system reliability.
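    The abstract names the three factors behind the Maintenance Priority Number but not their exact combination rule. A minimal sketch, assuming a multiplicative form analogous to the classical FMEA Risk Priority Number (the paper's actual weighting may differ, and the component names below are hypothetical):

    ```python
    def maintenance_priority_number(importance, risk, detection_difficulty):
        """Hypothetical MPN combining the three factors named in the abstract.
        Multiplicative form mirrors the FMEA Risk Priority Number."""
        return importance * risk * detection_difficulty

    # Rank components by descending MPN to get a maintenance sequence
    components = {"gyro": (9, 7, 4), "accelerometer": (8, 5, 3), "psu": (6, 8, 2)}
    ranking = sorted(components,
                     key=lambda c: maintenance_priority_number(*components[c]),
                     reverse=True)
    ```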

  13. Incorporating travel time reliability into the Highway Capacity Manual.

    Science.gov (United States)

    2014-01-01

    This final report documents the activities performed during SHRP 2 Reliability Project L08: Incorporating Travel Time Reliability into the Highway Capacity Manual. It serves as a supplement to the proposed chapters for incorporating travel time relia...

  14. Reliability of diagnostic imaging techniques in suspected acute appendicitis: proposed diagnostic protocol

    International Nuclear Information System (INIS)

    Cura del, J. L.; Oleaga, L.; Grande, D.; Vela, A. C.; Ibanez, A. M.

    2001-01-01

    To study the utility of ultrasound and computed tomography (CT) in case of suspected appendicitis. To determine the diagnostic yield in terms of different clinical contexts and patient characteristics. to assess the costs and benefits of introducing these techniques and propose a protocol for their use. Negative appendectomies, complications and length of hospital stay in a group of 152 patients with suspected appendicitis who underwent ultrasound and CT were compared with those of 180 patients who underwent appendectomy during the same time period, but had not been selected for the first group: these patients costs for each group were calculated. In the first group, the diagnostic value of the clinical signs was also evaluated. The reliability of the clinical signs was limited, while the results with ultrasound and CT were excellent. The incidence of negative appendectomy was 9.6% in the study group and 12.2% in the control group. Moreover, there were fewer complications and a shorter hospital stay in the first group. Among men, however, the rate of negative appendectomy was lower in the control group. The cost of using ultrasound and CT in the management of appendicitis was only slightly higher than that of the control group. Although ultrasound and CT are not necessary in cases in which the probability of appendicitis is low or in men presenting clear clinical evidence, the use of these techniques is indicated in the remaining cases in which appendicitis is suspected. In children, ultrasound is the technique of choice. In all other patients, if negative results are obtained with one of the two techniques, the other should be performed. (Author) 49 refs

  15. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence highly conflicts, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
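    The combination step that the weighted-averaging preprocessing feeds into is Dempster's rule. A self-contained sketch over frozenset focal elements (the reliability-based weighting itself is omitted; this is the standard rule, not the paper's full method):

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions whose focal
        elements are frozensets. Conflicting mass is redistributed by
        normalization."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}
    ```

    With highly conflicting reports, this rule alone produces the counterintuitive results the abstract mentions, which is what motivates discounting each report by its sensor reliability before combining.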

  16. Human reliability in complex systems: an overview

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1976-07-01

    A detailed analysis is presented of the main conceptual background underlying the areas of human reliability and human error. The concept of error is examined and generalized to that of human reliability, and some of the practical and methodological difficulties of reconciling the different standpoints of the human factors specialist and the engineer are discussed. Following a survey of general reviews available on human reliability, quantitative techniques for the prediction of human reliability are considered. An in-depth critical analysis of the various quantitative methods is then presented, together with the data bank requirements for human reliability prediction. Reliability considerations in process control and nuclear plants, and also in the areas of design, maintenance, testing and emergency situations, are discussed. The effects of stress on human reliability are analysed and methods of minimizing these effects discussed. Finally, a summary is presented and proposals for further research are set out. (author)

  17. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupling with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.

  18. 75 FR 66038 - Planning Resource Adequacy Assessment Reliability Standard

    Science.gov (United States)

    2010-10-27

    ..., providing a common framework for resource adequacy analysis, assessment, and documentation) effectively and...://www.nerc.com/commondocs.php?cd=2 . D. Proposed Effective Date 24. Proposed regional Reliability...

  19. Novel approach for evaluation of service reliability for electricity customers

    Institute of Scientific and Technical Information of China (English)

    JIANG, John N.

    2009-01-01

    Understanding the value of reliability to electricity customers is important for market-based reliability management. This paper proposes a novel approach to evaluating reliability for electricity customers by using an indifference curve between economic compensation for power interruption and service reliability. The indifference curve is formed by calculating different planning schemes of network expansion for different reliability requirements of customers, which reveals the economic value of different reliability levels for electricity customers, so that reliability based on a market supply-demand mechanism can be established and economic signals can be provided for reliability management and enhancement.

  20. Study of redundant Models in reliability prediction of HXMT's HES

    International Nuclear Information System (INIS)

    Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng

    2010-01-01

    First, two redundant equipment structures for HXMT's HES are proposed: block backup and dual-system cold redundancy. The reliability of each is then predicted using the parts count method, and the two proposals are compared and analyzed. It is concluded that a redundant equipment structure based on block backup offers higher reliability and a longer service life. (authors)
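    The kind of computation such a comparison involves can be illustrated with textbook redundancy formulas. These are standard expressions for active parallel redundancy and for a two-unit cold standby with perfect switching and exponential lifetimes, not the paper's GO-chart model, and the numbers are illustrative only.

    ```python
    import math

    def parallel_reliability(r, n=2):
        """Active redundancy (block backup style): the system works if any
        of n identical units with reliability r works."""
        return 1.0 - (1.0 - r) ** n

    def cold_standby_reliability(lam, t):
        """Two-unit cold standby, perfect switching, exponential(lam) units:
        R(t) = exp(-lam*t) * (1 + lam*t)."""
        return math.exp(-lam * t) * (1.0 + lam * t)
    ```

    In a real design the comparison also has to account for switch reliability, dormancy failure rates, and the surrounding system structure, which is what the GO-chart analysis in the paper captures.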

  1. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  2. Reliability-based Assessment of Fatigue Life for Bridges

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2012-01-01

    The reliability level for bridges is discussed based on a comparison of the reliability levels proposed and used by e.g. JCSS, ISO, NKB and Eurocodes. The influence of reserve capacity by which failure of a specific detail does not lead to structural collapse is investigated. The results show...

  3. Relevance feature selection of modal frequency-ambient condition pattern recognition in structural health assessment for reinforced concrete buildings

    Directory of Open Access Journals (Sweden)

    He-Qing Mu

    2016-08-01

    Modal frequency is an important indicator for structural health assessment. Previous studies have shown that this indicator is substantially affected by the fluctuation of ambient conditions, such as temperature and humidity. Therefore, recognizing the pattern between modal frequency and ambient conditions is necessary for reliable long-term structural health assessment. In this article, a novel machine-learning algorithm is proposed to automatically select relevance features in modal frequency-ambient condition pattern recognition based on structural dynamic response and ambient condition measurement. In contrast to traditional feature selection approaches, which examine a large number of combinations of extracted features, the proposed algorithm conducts continuous relevance feature selection by introducing a sophisticated hyperparameterization on the weight parameter vector controlling the relevancy of different features in the prediction model. The proposed algorithm is then utilized for structural health assessment of a reinforced concrete building based on 1-year daily measurements. It turns out that the optimal model class including the relevance features for each vibrational mode is capable of capturing the pattern between the corresponding modal frequency and the ambient conditions.

  4. Sensitivity analysis in a structural reliability context

    International Nuclear Information System (INIS)

    Lemaitre, Paul

    2014-01-01

    This thesis' subject is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that allows a complex physical phenomenon to be reproduced. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output might be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. This thesis' aim is to test existing sensitivity analysis methods, and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented. This step raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first one proposes to make use of binary classifiers (random forests). The second one measures the departure, at each step of a subset method, between each input's original density and its density given the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are then applied to the CWNR case, which motivates this thesis. (author)

  5. A standard for test reliability in group research.

    Science.gov (United States)

    Ellis, Jules L

    2013-03-01

    Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7% of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
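    In the worked example above, the efficient reliability equals one minus the test-administration cost fraction. A tiny sketch under that assumed relation (the article's full derivation is not reproduced here and may involve additional terms):

    ```python
    def efficient_reliability(admin_cost_fraction):
        """Assumed relation consistent with the example in the abstract:
        administration at 7% of total costs gives 1 - 0.07 = 0.93."""
        return 1.0 - admin_cost_fraction
    ```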

  6. Validation of a method for assessing resident physicians' quality improvement proposals.

    Science.gov (United States)

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals, the Quality Improvement Proposal Assessment Tool (QIPAT-7), and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4) suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.

  7. Reliability analysis with linguistic data: An evidential network approach

    International Nuclear Information System (INIS)

    Zhang, Xiaoge; Mahadevan, Sankaran; Deng, Xinyang

    2017-01-01

    In practical applications of reliability assessment of a system in-service, information about the condition of a system and its components is often available in text form, e.g., inspection reports. Estimation of the system reliability from such text-based records becomes a challenging problem. In this paper, we propose a four-step framework to deal with this problem. In the first step, we construct an evidential network with the consideration of available knowledge and data. Secondly, we train a Naive Bayes text classification algorithm based on the past records. By using the trained Naive Bayes algorithm to classify the new records, we build interval basic probability assignments (BPA) for each new record available in text form. Thirdly, we combine the interval BPAs of multiple new records using an evidence combination approach based on evidence theory. Finally, we propagate the interval BPA through the evidential network constructed earlier to obtain the system reliability. Two numerical examples are used to demonstrate the efficiency of the proposed method. We illustrate the effectiveness of the proposed method by comparing with Monte Carlo Simulation (MCS) results. - Highlights: • We model reliability analysis with linguistic data using evidential network. • Two examples are used to demonstrate the efficiency of the proposed method. • We compare the results with Monte Carlo Simulation (MCS).

  8. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction, and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR's parameters is important. Existing research on SVR parameter selection divides the example dataset into training and test subsets, and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. In contrast, the method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instant. By treating the SVR training model as the observation equation of a particle filter, our method allows the SVR model parameters to be updated dynamically when a new observation arrives. Because of the adaptability of the parameters to the dynamic data pattern, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR.

  9. Reliable Rescue Routing Optimization for Urban Emergency Logistics under Travel Time Uncertainty

    Directory of Open Access Journals (Sweden)

    Qiuping Li

    2018-02-01

    The reliability of rescue routes is critical for urban emergency logistics during disasters. However, studies on reliable rescue routing under stochastic networks are still rare. This paper proposes a multiobjective rescue routing model for urban emergency logistics under travel time reliability. A hybrid metaheuristic integrating ant colony optimization (ACO) and tabu search (TS) was designed to solve the model. An experiment optimizing rescue routing plans under a real urban storm event was carried out to validate the proposed model. The experimental results showed how our approach can improve rescue efficiency with high travel time reliability.
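    One common way to score the travel-time reliability of a candidate route is the probability of arriving within a time budget. A hedged sketch assuming independent, normally distributed link travel times (the paper's stochastic network model may differ):

    ```python
    import math

    def route_reliability(link_means, link_vars, time_budget):
        """P(total travel time <= budget) for a route whose links have
        independent normal travel times with the given means and variances."""
        mu = sum(link_means)
        sigma = math.sqrt(sum(link_vars))
        z = (time_budget - mu) / sigma
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ```

    A multiobjective search such as the ACO/TS hybrid would trade a score like this off against expected travel time when ranking candidate rescue routes.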

  10. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  11. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.

  12. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    environmental factors. Reliability assessment for such type of products conventionally is performed by classical reliability techniques based on test data. Usually conventional reliability approaches are time and resource consuming activities. Thus in this paper we choose a physics of failure approach to define...... damage model by Miner’s rule. Our attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Based on the proposed method it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed...

  13. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) will find it challenging to estimate the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.

  14. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  15. Reliability analysis framework for computer-assisted medical decision systems

    International Nuclear Information System (INIS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-01-01

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
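    The query-specific reliability idea can be sketched directly: find the k known cases nearest to the query in feature space and use the classifier's observed accuracy on them as the reliability of the CAD response. The data below are illustrative, not the study's mammographic features or classifiers.

    ```python
    import numpy as np

    def local_reliability(query, known_features, known_correct, k=5):
        """Accuracy of the CAD system on the k known cases nearest the query.

        known_features: (n, d) feature matrix of previously seen cases.
        known_correct:  length-n 0/1 array, 1 where the CAD decision was correct.
        """
        dists = np.linalg.norm(known_features - query, axis=1)
        nearest = np.argsort(dists)[:k]
        return float(np.mean(known_correct[nearest]))
    ```

    Because it only needs per-case correctness labels, a measure like this works with any classifier, which matches the claim that the technique is independent of classifier structure.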

  16. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.

  17. System Reliability Analysis Considering Correlation of Performances

    International Nuclear Information System (INIS)

    Kim, Saekyeol; Lee, Tae Hee; Lim, Woochul

    2017-01-01

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.
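    The gap between treating performances independently and modeling their correlation with a copula can be illustrated by Monte Carlo under a Gaussian copula. The marginal reliabilities and correlation below are illustrative values, not taken from the paper.

    ```python
    import numpy as np
    from statistics import NormalDist

    def system_reliability_mc(rho, r1=0.9, r2=0.9, n=200_000, seed=0):
        """P(both performances are safe) under a Gaussian copula with
        correlation rho between the two performance margins."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        # A performance is "safe" when its uniform copula coordinate falls
        # below its marginal reliability, i.e. z_i <= Phi^{-1}(r_i).
        t1, t2 = NormalDist().inv_cdf(r1), NormalDist().inv_cdf(r2)
        return float(np.mean((z[:, 0] <= t1) & (z[:, 1] <= t2)))
    ```

    With rho = 0 this reproduces the independent product 0.9 * 0.9 = 0.81; positive correlation raises the joint reliability above that product, which is exactly the difference between individual and system reliability the paper examines.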

  18. Mammary fibroadenoma: ductal pattern in pneumo-oncography

    International Nuclear Information System (INIS)

    Pinto Pabon, I.; Garcia Alvarez, A.; Castello Camerlinck, J.

    1988-01-01

    The authors present 25 cases of mammary fibroadenoma examined with pneumo-oncography; in all instances they obtained a characteristic pattern of air distribution, the ductal pattern, which allows fibroadenoma to be reliably diagnosed. No carcinoma demonstrated this type of air pattern. 9 refs.; 3 figs

  19. An evaluation of the multi-state node networks reliability using the traditional binary-state networks reliability algorithm

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2003-01-01

    A system in which the components and the system itself are allowed to have a number of performance levels is called a multi-state system (MSS). A multi-state node network (MNN) is a generalization of the MSS that does not satisfy the flow conservation law. Evaluating MNN reliability arises at the design and exploitation stages of many types of technical systems. Up to now, existing methods could only evaluate the reliability of a special MNN called the multi-state node acyclic network (MNAN), in which no cycles are allowed; no method existed for evaluating the reliability of a general MNN. The main purpose of this article is to show first that MNN reliability can be solved using any traditional binary-state network (TBSN) reliability algorithm with a special code for the state probability. A simple heuristic SDP algorithm based on minimal cuts (MC) for estimating MNN reliability is presented as an example to show how a TBSN reliability algorithm is revised to solve the MNN reliability problem. To the author's knowledge, this study is the first to discuss the relationships between MNN and TBSN and also the first to present methods for solving the exact and approximate MNN reliability. An example illustrates how the exact MNN reliability is obtained using the proposed algorithm.

  20. Scheduling for energy and reliability management on multiprocessor real-time systems

    Science.gov (United States)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, this is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) can significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability to satisfy the quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms on reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes preserve system reliability while still achieving substantial energy savings.

  1. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  2. How to Measure the Onset of Babbling Reliably?

    Science.gov (United States)

    Molemans, Inge; van den Berg, Renate; van Severen, Lieve; Gillis, Steven

    2012-01-01

    Various measures for identifying the onset of babbling have been proposed in the literature, but a formal definition of the exact procedure and a thorough validation of the sample size required for reliably establishing babbling onset is lacking. In this paper the reliability of five commonly used measures is assessed using a large longitudinal…

  3. A G-function-based reliability-based design methodology applied to a cam roller system

    International Nuclear Information System (INIS)

    Wang, W.; Sui, P.; Wu, Y.T.

    1996-01-01

    Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of calls for g-function calculations, since it requires only one full reliability analysis per design iteration. A cam roller system in a typical high-pressure fuel-injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application.
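    The distinction this record draws between the reliability function and the g-function can be illustrated with a plain Monte Carlo estimate of failure probability from a hypothetical limit-state function g = R − S. The distributions and numbers below are illustrative, not the paper's cam roller model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical limit-state (g-) function: g > 0 means safe, g <= 0 means failure.
# Capacity R and demand S are assumed normal; the numbers are illustrative only.
def g(r, s):
    return r - s

n = 500_000
r = rng.normal(loc=10.0, scale=1.0, size=n)   # capacity
s = rng.normal(loc=7.0, scale=1.5, size=n)    # demand
pf = np.mean(g(r, s) <= 0)                    # probability of failure
print(f"estimated failure probability: {pf:.4f}")
print(f"reliability: {1 - pf:.4f}")
```

A design optimizer that works on g directly can reuse samples like these across candidate designs, which is the efficiency argument the record makes against nesting a full reliability evaluation inside every optimization step.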

  4. An Enhanced Backbone-Assisted Reliable Framework for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Amna Ali

    2010-03-01

    Full Text Available An extremely reliable source-to-sink communication is required for most contemporary WSN applications, especially those pertaining to military, healthcare and disaster recovery. However, due to their intrinsic energy, bandwidth and computational constraints, Wireless Sensor Networks (WSNs) encounter several challenges in reliable source-to-sink communication. In this paper, we present a novel reliable topology that uses reliable hotlines between sensor gateways to boost the reliability of end-to-end transmissions. This reliable and efficient routing alternative reduces the average number of hops from source to sink. We prove, with the help of analytical evaluation, that communication using hotlines is considerably more reliable than traditional WSN routing. We use reliability theory to analyze the cost and benefit of adding gateway nodes to a backbone-assisted WSN. However, since hotline-assisted routing might add latency in scenarios where the source and the sink are just a couple of hops apart, we present a Signature Based Routing (SBR) scheme. SBR enables the gateways to make intelligent routing decisions based upon the derived signature, hence providing lower end-to-end delay in source-to-sink communication. Finally, we evaluate our proposed hotline-based topology with the help of a simulation tool and show that the proposed topology provides a manifold increase in end-to-end reliability.

  5. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  6. Transmission cost allocation based on power flow tracing considering reliability benefit

    International Nuclear Information System (INIS)

    Leepreechanon, N.; Singharerg, S.; Padungwech, W.; Nakawiro, W.; Eua-Arporn, B.; David, A.K.

    2007-01-01

    Power transmission networks must be able to accommodate the continuously growing demand for reliable and economical electricity. This paper presented a method to allocate transmission use and reliability cost to both generators and end-consumers. Although transmission cost allocation methods change depending on the local context of the electric power industry, there is a common principle that transmission line capacity should be properly allocated to accommodate actual power delivery with an adequate reliability margin. The method proposed in this paper allocates the transmission embedded cost to both generators and loads in an equitable manner, incorporating probability indices to allocate the transmission reliability margin among users on both the supply and demand sides. The application of the proposed method was illustrated using Bialek's tracing method on a multiple-circuit, six-bus transmission system. Probabilistic indices known as the transmission internal reliability margin (TIRM) and the transmission external reliability margin (TERM), decomposed from the transmission reliability margin (TRM), were introduced, revealing the true cost of using the overall transmission facilities. 6 refs., 11 tabs., 5 figs

  7. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems that makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components becomes easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
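    The flexibility of mixing component types inside a subsystem can be shown on a toy instance. The sketch below brute-forces a tiny series-parallel redundancy allocation problem; a GA, as in the record, would be needed at realistic sizes, and the component catalogue and budget here are assumed:

```python
from itertools import combinations_with_replacement, product

# Hypothetical component catalogue, identical for both subsystems: (reliability, cost).
# Allowing mixed types inside a subsystem is the flexibility the record describes.
catalogue = [(0.90, 3), (0.85, 2), (0.80, 1)]
budget = 14
n_subsystems = 2
max_redundancy = 3

def parallel_rel(parts):
    # Reliability of components in parallel: 1 - product of unreliabilities.
    q = 1.0
    for r, _ in parts:
        q *= (1.0 - r)
    return 1.0 - q

# All multisets of 1..max_redundancy components (mixing types is allowed)
choices = []
for k in range(1, max_redundancy + 1):
    choices += list(combinations_with_replacement(catalogue, k))

best = None
for design in product(choices, repeat=n_subsystems):
    cost = sum(c for sub in design for _, c in sub)
    if cost > budget:
        continue
    rel = 1.0
    for sub in design:                 # subsystems in series
        rel *= parallel_rel(sub)
    if best is None or rel > best[0]:
        best = (rel, cost, design)

rel, cost, design = best
print(f"best system reliability: {rel:.6f} at cost {cost}")
print(f"design: {design}")
```

On this instance the optimum mixes two strong components with one cheap one in each subsystem, which a homogeneous-only model would exclude from the search space.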

  8. Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection.

    Science.gov (United States)

    Zhang, Minliang; Chen, Qian; Tao, Tianyang; Feng, Shijie; Hu, Yan; Li, Hui; Zuo, Chao

    2017-08-21

    Temporal phase unwrapping (TPU) is an essential algorithm in fringe projection profilometry (FPP), especially when measuring complex objects with discontinuities and isolated surfaces. Among others, the multi-frequency TPU has been proven to be the most reliable algorithm in the presence of noise. For a practical FPP system, in order to achieve an accurate, efficient, and reliable measurement, one needs to make wise choices about three key experimental parameters: the highest fringe frequency, the phase-shifting steps, and the fringe pattern sequence. However, there was very little research on how to optimize these parameters quantitatively, especially considering all three aspects from a theoretical and analytical perspective simultaneously. In this work, we propose a new scheme to determine simultaneously the optimal fringe frequency, phase-shifting steps and pattern sequence under multi-frequency TPU, robustly achieving high accuracy measurement by a minimum number of fringe frames. Firstly, noise models regarding phase-shifting algorithms as well as 3-D coordinates are established under a projector defocusing condition, which leads to the optimal highest fringe frequency for a FPP system. Then, a new concept termed frequency-to-frame ratio (FFR) that evaluates the magnitude of the contribution of each frame for TPU is defined, on which an optimal phase-shifting combination scheme is proposed. Finally, a judgment criterion is established, which can be used to judge whether the ratio between adjacent fringe frequencies is conducive to stably and efficiently unwrapping the phase. The proposed method provides a simple and effective theoretical framework to improve the accuracy, efficiency, and robustness of a practical FPP system in actual measurement conditions. The correctness of the derived models as well as the validity of the proposed schemes have been verified through extensive simulations and experiments. Based on a normal monocular 3-D FPP hardware system…
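    The multi-frequency TPU building block that this record relies on — using a low-frequency phase to determine the fringe order of a wrapped high-frequency phase — can be sketched as follows; the frequencies and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-frequency temporal phase unwrapping, the standard building block of
# multi-frequency TPU. Frequencies and noise levels here are illustrative.
f_high = 16                                    # fringes across the field
x = np.linspace(0.0, 1.0, 1000)
phi_true = 2 * np.pi * f_high * x              # absolute high-frequency phase

wrap = lambda p: np.angle(np.exp(1j * p))      # wrap to (-pi, pi]
phi_unit = phi_true / f_high + rng.normal(0, 0.01, x.size)   # unit-frequency phase (noisy)
phi_wrapped = wrap(phi_true + rng.normal(0, 0.05, x.size))   # measured wrapped phase

# Fringe order from the low-frequency reference, then unwrap
k = np.round((f_high * phi_unit - phi_wrapped) / (2 * np.pi))
phi_unwrapped = phi_wrapped + 2 * np.pi * k

err = np.abs(phi_unwrapped - phi_true)
print(f"max unwrapping error: {err.max():.3f} rad")
```

Unwrapping fails once the amplified low-frequency noise (f_high times its phase noise) approaches pi, which is why the ratio between adjacent frequencies in the sequence has to be chosen carefully, as the record's judgment criterion formalizes.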

  9. Experimental research of fuel element reliability

    International Nuclear Information System (INIS)

    Cech, B.; Novak, J.; Chamrad, B.

    1980-01-01

    The rate and extent of damage to the can, the integrity barrier against fission products, is the basic criterion of reliability. The extent of damage is measurable by the fission product leakage into the reactor coolant circuit. An analysis is made of the causes of fuel element can damage, and a model is proposed for testing fuel element reliability. Special experiments should be carried out to assess partial processes, such as heat transfer and fuel element surface temperature, fission gas liberation and pressure changes inside the element, corrosion weakening of the can wall, and can deformation as a result of mechanical interactions. The irradiation probe for reliability testing of fuel elements is described. (M.S.)

  10. Evolution of Machine Reliability and Life and Economics of Operational Use

    Directory of Open Access Journals (Sweden)

    Młynarski Stanisław

    2016-12-01

    Full Text Available The article presents new assumptions on the reliability and life of machines resulting from the development of technology. The innovative approach to reliability and life design, as well as warranty duration planning, is presented using the example of vehicle reliability characteristics. A new algorithm is proposed for replacing the costs of repairable objects with the price, life and reliability of new unrepairable ones. For planning the life of innovative machines, an effective method of determining the rate of technical progress is proposed. In conclusion, necessary modifications of machine and vehicle use systems, resulting from technology evolution and technical progress, are indicated. Finally, recommendations and directions of indispensable research in engineering and management of technical means of production are formulated.

  11. Reliability Evaluation of Service-Oriented Architecture Systems Considering Fault-Tolerance Designs

    Directory of Open Access Journals (Sweden)

    Kuan-Li Peng

    2014-01-01

    strategies. Sensitivity analysis of SOA at both coarse and fine grain levels is also studied, which can be used to efficiently identify the critical parts within the system. Two SOA system scenarios based on real industrial practices are studied. Experimental results show that the proposed SOA model can be used to accurately depict the behavior of SOA systems. Additionally, a sensitivity analysis that quantifies the effects of system structure as well as fault tolerance on the overall reliability is also presented. On the whole, the proposed reliability modeling and analysis framework may help the SOA system service provider to evaluate the overall system reliability effectively and also make smarter improvement plans by focusing resources on enhancing reliability-sensitive parts within the system.

  12. A Method for Improving Reliability of Radiation Detection using Deep Learning Framework

    International Nuclear Information System (INIS)

    Chang, Hojong; Kim, Tae-Ho; Han, Byunghun; Kim, Hyunduk; Kim, Ki-duk

    2017-01-01

    Radiation detection is an essential technology for the overall field of radiation and nuclear engineering. Previously, radiation detection relied on preparing, in advance, a table mapping input spectra to output spectra, which requires simulating numerous predicted output spectra with parameters modeling the spectrum. In this paper, we propose a new technique to improve the performance of radiation detectors. The software in radiation detectors has been stagnant for a while, with possible intrinsic simulation errors. In the proposed method, the input source is predicted from the output spectrum measured by the radiation detector using a deep neural network. With a highly complex model, we expect that the complex relationship between the data and the label can be captured well. Furthermore, a radiation detector should be calibrated regularly and beforehand; we propose a method to calibrate radiation detectors using a generative adversarial network (GAN). We hope that the power of deep learning may also reach radiation detectors and bring major improvements to the field. With improved radiation detectors, detection would be more reliable, and many tasks remain to be solved using deep learning in the nuclear engineering community.

  13. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation of safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V&V defect density characteristics of every stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery, helping to improve the reliability of NPP safety-important software. (authors)

  14. Reliability-oriented Design of a Cost-effective Active Capacitor

    DEFF Research Database (Denmark)

    Wang, Haoran; Wang, Huai

    2017-01-01

    This paper presents the reliability-oriented design of a two-terminal active capacitor proposed recently. The two-terminal active capacitor has the same level of convenience as a passive capacitor with a reduced requirement of overall energy storage. In order to fully explore the potential...... of the proposed concept, a comprehensive design procedure is necessary to optimally size the key components of the active capacitor in terms of cost and reliability. Moreover, the inherent condition monitoring capability of the active capacitor is discussed by utilizing the existing feedback signals. A 500 W...

  15. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  16. Reliability evaluation of the ECCS of LWR No.2

    International Nuclear Information System (INIS)

    Tsujimura, Yasuhiro; Suzuki, Eiji

    1987-01-01

    In this paper, a new characteristic function of probability importance is proposed and discussed. The function represents the overall characteristics of system reliability in relation to the failure probability of each system component. Further, results of evaluations obtained by applying the method to practical system reliability design are shown. (author)
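    The record's characteristic function relates system reliability to each component's failure probability. A standard quantity in the same spirit is the Birnbaum probability importance, sketched here for a small parallel-series system; the structure and numbers are illustrative, not taken from the record:

```python
# Birnbaum probability importance for a small system:
# components 1 and 2 in parallel, in series with component 3.
# I_i = dR_sys/dp_i = R_sys(p_i = 1) - R_sys(p_i = 0).

def system_rel(p):
    p1, p2, p3 = p
    return (1 - (1 - p1) * (1 - p2)) * p3

p = [0.9, 0.8, 0.95]  # assumed component reliabilities

def birnbaum(i):
    hi = p.copy(); hi[i] = 1.0
    lo = p.copy(); lo[i] = 0.0
    return system_rel(hi) - system_rel(lo)

for i in range(3):
    print(f"component {i + 1}: importance {birnbaum(i):.4f}")
```

The series component dominates (importance 0.98), while each parallel component matters only to the extent that its partner fails, which is the kind of ranking such an importance function makes explicit.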

  17. Reliable and rapid characterization of functional FCN2 gene variants reveals diverse geographical patterns

    Directory of Open Access Journals (Sweden)

    Ojurongbe Olusola

    2012-05-01

    Full Text Available Abstract Background Ficolin-2, coded by the FCN2 gene, is a soluble serum protein and an innate immune recognition element of the complement system. FCN2 gene polymorphisms reveal distinct geographical patterns and are documented to alter serum ficolin levels and modulate disease susceptibility. Methods We employed a real-time PCR method based on Fluorescence Resonance Energy Transfer (FRET) to genotype four functional SNPs including -986 G > A (#rs3124952), -602 G > A (#rs3124953), -4A > G (#rs17514136) and +6424 G > T (#rs7851696) in the ficolin-2 (FCN2) gene. We characterized the FCN2 variants in individuals of Brazilian (n = 176), Nigerian (n = 180), Vietnamese (n = 172) and European Caucasian (n = 165) ethnicity. Results We observed that the genotype distribution of three functional SNP variants (-986 G > A, -602 G > A and -4A > G) differs significantly between the populations investigated. Conclusions The observed distribution of the FCN2 functional SNP variants may likely contribute to altered serum ficolin levels, and this may depend on the different disease settings in world populations. To conclude, the use of FRET-based real-time PCR, especially for the FCN2 gene, will benefit the larger scientific community that extensively depends on a rapid, reliable method for FCN2 genotyping.

  18. Design of Accelerated Reliability Test for CNC Motorized Spindle Based on Vibration Signal

    Directory of Open Access Journals (Sweden)

    Chen Chao

    2016-01-01

    Full Text Available The motorized spindle is the key functional component of CNC machining centers; it is a mechatronic system with long life and high reliability. The reliability test cycle of a motorized spindle is too long to be feasible. This paper proposes a new accelerated test for the reliability evaluation of motorized spindles. Through field reliability testing, the authors collect and calculate load data including rotational speed, cutting force and torque, and analyze the load spectrum distribution. The authors design a test platform to apply the load spectrum. A new method to define a fuzzy acceleration factor based on the vibration signal is proposed. Finally, the whole accelerated reliability test plan is drawn up.

  19. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    Science.gov (United States)

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
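    Two of the performance measures named in this record can be computed directly for a toy Pareto front. The points and the ideal point below are assumed, and definitions of these metrics vary slightly across papers:

```python
import math

# Toy bi-objective Pareto front (minimize both objectives; points are illustrative).
front = [(1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (7.0, 2.0), (9.0, 1.0)]
ideal = (0.0, 0.0)  # assumed ideal point

# Mean ideal distance: average Euclidean distance of front members to the ideal point
mid = sum(math.dist(p, ideal) for p in front) / len(front)

# Diversification metric: spread of the front (distance between its extreme members)
div = math.dist(front[0], front[-1])

print(f"mean ideal distance: {mid:.4f}")
print(f"diversification:     {div:.4f}")
```

A lower mean ideal distance indicates convergence toward the ideal point, while a larger diversification value indicates a wider spread of trade-off solutions; the record compares algorithms on both.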

  20. Reliability and relative validity of a child nutrition questionnaire to simultaneously assess dietary patterns associated with positive energy balance and food behaviours, attitudes, knowledge and environments associated with healthy eating

    Directory of Open Access Journals (Sweden)

    Magarey Anthea M

    2008-01-01

    Full Text Available Abstract Background Food behaviours, attitudes, environments and knowledge are relevant to professionals in childhood obesity prevention, as are dietary patterns which promote positive energy balance. There is a lack of valid and reliable tools to measure these parameters. The aim of this study was to determine the reliability and relative validity of a child nutrition questionnaire assessing all of these parameters, used in the evaluation of a community-based childhood obesity prevention project. Methods The development of the 14-item questionnaire was informed by the aims of the obesity prevention project. A sub-sample of children aged 10–12 years from primary schools involved in the intervention was recruited at the project's baseline data collection (Test 1). Questionnaires were readministered (Test 2), following which students completed a 7-day food diary designed to reflect the questionnaire. Twelve scores were derived to assess consumption of fruit, vegetables, water, noncore foods and sweetened beverages plus food knowledge, behaviours, attitudes and environments. Reliability was assessed using (a) the intra-class correlation coefficient (ICC) and 95% confidence intervals to compare scores from Tests 1 and 2 (test-retest reliability) and (b) Cronbach's alpha (internal consistency). Validity was assessed with Spearman correlations, bias and limits of agreement between scores from Test 1 and the 7-day diaries. The Wilcoxon signed rank test checked for significant differences between mean scores. Results One hundred and forty-one students consented to the study. Test 2 (n = 134) occurred between eight and 36 days after Test 1. ICCs ranged from 0.47–0.66 for 10/12 scores, and criteria were met for 10/12 (test-retest reliability) and 3/7 (validity) scores. Conclusion This child nutrition questionnaire is a valid and reliable tool to simultaneously assess dietary patterns associated with positive energy balance, and food behaviours, attitudes and environments in…

  1. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of fully independent nodes (which can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, in this paper, a propagation-based link reliability model, rather than a binary model, with nodes following a known failure distribution is proposed to evaluate the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation. The method is illustrated with an application and some imperative results are also presented.
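    The contrast between a binary disc model and a propagation-based link model can be sketched with a small Monte Carlo estimate of two-terminal reliability (2TRm) for randomly placed imperfect nodes. The topology parameters and the decay law below are assumed, not taken from the paper:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

n_nodes, r0 = 12, 0.45        # nodes in a unit square; r0 is a range parameter
p_node = 0.95                 # node (transceiver) reliability, assumed

def link_prob(d):
    # Propagation-based model: success probability decays smoothly with
    # distance, instead of the binary "in range / out of range" disc model.
    return np.exp(-(d / r0) ** 2)

def two_terminal(trials=2000):
    ok = 0
    for _ in range(trials):
        pos = rng.random((n_nodes, 2))
        up = rng.random(n_nodes) < p_node
        # sample link states from the propagation model
        adj = [[] for _ in range(n_nodes)]
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                d = np.linalg.norm(pos[i] - pos[j])
                if up[i] and up[j] and rng.random() < link_prob(d):
                    adj[i].append(j); adj[j].append(i)
        # BFS from source 0 toward terminal n-1
        seen, q = {0}, deque([0])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); q.append(v)
        ok += (n_nodes - 1) in seen and up[0] and up[n_nodes - 1]
    return ok / trials

r = two_terminal()
print(f"estimated 2-terminal reliability: {r:.3f}")
```

Swapping `link_prob` for a hard threshold (`1.0 if d < r0 else 0.0`) recovers the binary model the record argues against, so the two can be compared within the same simulation harness.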

  2. Reliable Ant Colony Routing Algorithm for Dual-Channel Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    YongQiang Li

    2018-01-01

    Full Text Available For the problem of poor link reliability caused by high-speed dynamic changes, and of congestion owing to low network bandwidth in ad hoc networks, an ant colony routing algorithm based on reliable paths under dual-channel conditions (DSAR) is proposed. First, a dual-channel communication mode is used to improve network bandwidth, and a hierarchical network model is proposed to optimize the dual-layer network, reducing network congestion and communication delay. Second, a comprehensive reliable path selection strategy is designed, and the reliable path is selected ahead of time to reduce the probability of routing restarts. Finally, the ant colony algorithm is used to improve the adaptability of the routing algorithm to changes in network topology. Simulation results show that DSAR improves routing reliability, packet delivery, and throughput.

  3. A heuristic-based approach for reliability importance assessment of energy producers

    International Nuclear Information System (INIS)

    Akhavein, A.; Fotuhi Firuzabad, M.

    2011-01-01

    Reliability of energy supply is one of the most important issues of service quality. On one hand, customers usually have different expectations for service reliability and price. On the other hand, providing different levels of reliability at load points is a challenge for system operators. In order to take reasonable decisions and obviate reliability implementation difficulties, market players need to know the impacts of their assets on system and load-point reliabilities. One tool to specify the reliability impacts of assets is the criticality or reliability importance measure, by which system components can be ranked based on their effect on reliability. Conventional methods for the determination of reliability importance are essentially based on risk sensitivity analysis and hence impose a prohibitive calculation burden in large power systems. An approach is proposed in this paper to determine the reliability importance of energy producers from the perspective of consumers or distribution companies in a composite generation and transmission system. In the presented method, while avoiding an immense computational burden, the energy producers are ranked based on their rating, unavailability and impact on power flows in the lines connecting to the considered load points. Study results on the IEEE reliability test system show successful application of the proposed method. - Research highlights: → A required reliability level at load points is a concern in modern power systems. → It is important to assess the reliability importance of energy producers or generators. → Generators can be ranked based on their impacts on power flow to a selected area. → Ranking of generators is an efficient tool to assess their reliability importance.

  4. Reliability assessment of aging structures subjected to gradual and shock deteriorations

    International Nuclear Information System (INIS)

    Wang, Cao; Zhang, Hao; Li, Quanwang

    2017-01-01

    Civil structures and infrastructure facilities are susceptible to deterioration posed by the effects of natural hazards and aggressive environmental conditions. These factors may increase the risk of service interruption of infrastructures, and should be taken into account when assessing the structural reliability during an infrastructure's service life. Modeling the resistance deterioration process reasonably is the basis for structural reliability analysis. In this paper, a novel model is developed for describing the deterioration of aging structures. The deterioration is a combination of two stochastic processes: the gradual deterioration posed by environmental effects and the shock deterioration caused by severe load attacks. The dependency of the deterioration magnitude on the load intensity is considered. The Gaussian copula function is employed to help construct the joint distribution of correlated random variables. Semi-analytical methods are developed to assess the structural failure time and the number of significant load events (shocks) to failure. Illustrative examples are presented to demonstrate the applicability of the proposed model in structural reliability analysis. Parametric studies are performed to investigate the role of deterioration-load correlation in structural reliability. - Highlights: • A new resistance deterioration model for aging structures is proposed. • Time-dependent reliability analysis methods incorporating the proposed deterioration model are developed. • Parametric studies are performed to investigate the role of deterioration-load correlation in structural reliability.
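    The combined gradual-plus-shock deterioration process described in this record can be sketched with a simple simulation in which shocks arrive as a Poisson process and the deterioration magnitude depends on the load intensity. All distributions and parameter values below are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Resistance = initial strength minus gradual (environmental) deterioration
# minus shock deterioration from severe load events (Poisson arrivals).
R0 = 100.0
grad_rate = 0.8            # gradual resistance loss per year
lam = 0.5                  # shock (severe load) rate per year
sustained_load = 60.0

def failure_time(horizon=100.0):
    t, resistance = 0.0, R0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)        # time to next shock
        # gradual deterioration between shocks; check sustained-load failure first
        t_fail = (resistance - sustained_load) / grad_rate
        if t_fail <= dt:
            return t + t_fail
        t += dt
        resistance -= grad_rate * dt
        load = rng.gumbel(60.0, 8.0)           # shock load intensity
        if load >= resistance:
            return t                           # overload failure at the shock
        # deterioration magnitude depends on the load intensity, the
        # deterioration-load correlation the record emphasises
        resistance -= 0.05 * max(load - 40.0, 0.0)
        if resistance <= sustained_load:
            return t
    return horizon

times = np.array([failure_time() for _ in range(2000)])
print(f"mean time to failure: {times.mean():.1f} years")
```

Under gradual deterioration alone the failure time would be (100 − 60)/0.8 = 50 years; the shocks and their load-dependent damage pull the simulated mean well below that bound.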

  5. Personal authentication through dorsal hand vein patterns

    Science.gov (United States)

    Hsu, Chih-Bin; Hao, Shu-Sheng; Lee, Jen-Chun

    2011-08-01

    Biometric identification is an emerging technology that can solve security problems in our networked society. A reliable and robust personal verification approach using dorsal hand vein patterns is proposed in this paper. The approach has low computational and memory requirements and high recognition accuracy. In our work, a near-infrared charge-coupled device (CCD) camera is adopted as the input device for capturing dorsal hand vein images; it has the advantages of low cost and noncontact imaging. In the proposed approach, two finger-peaks are automatically selected as the datum points to define the region of interest (ROI) in the dorsal hand vein images. A modified two-directional two-dimensional principal component analysis, which performs an alternate two-dimensional PCA (2DPCA) in the column direction of images in the 2DPCA subspace, is proposed to exploit the correlation of vein features inside the ROI between images. The major advantage of the proposed method is that it requires fewer coefficients for efficient dorsal hand vein image representation and recognition. The experimental results on our large dorsal hand vein database show that the presented scheme achieves promising performance (false reject rate: 0.97% and false acceptance rate: 0.05%) and is feasible for dorsal hand vein recognition.
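The single-direction building block of the recognition scheme, 2DPCA, can be sketched as follows. This is a generic 2DPCA sketch on synthetic data, not the paper's modified two-directional variant (which additionally applies a second PCA in the column direction of the projected features); the array shapes and toy data are assumptions.

```python
import numpy as np

def two_d_pca(images, k):
    """Row-direction 2DPCA: project each image onto the top-k eigenvectors of
    the image covariance matrix G = E[(X - Xbar)^T (X - Xbar)], so an m x n
    image is represented by m x k coefficients instead of m x n pixels."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    G = sum((img - mean).T @ (img - mean) for img in X) / len(X)
    _, vecs = np.linalg.eigh(G)                 # eigenvalues in ascending order
    W = vecs[:, -k:]                            # top-k projection axes
    return np.stack([img @ W for img in X]), W

rng = np.random.default_rng(0)
imgs = rng.normal(size=(10, 8, 6))              # ten toy 8x6 "ROI images"
feats, W = two_d_pca(imgs, k=2)                 # each image now 8x2 coefficients
```

The coefficient reduction is the point the abstract emphasizes: here each 8x6 image collapses to an 8x2 feature matrix before matching.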

  6. Pattern recognition of neurotransmitters using multimode sensing.

    Science.gov (United States)

    Stefan-van Staden, Raluca-Ioana; Moldoveanu, Iuliana; van Staden, Jacobus Frederick

    2014-05-30

    Pattern recognition is essential in the chemical analysis of biological fluids, and reliable, sensitive methods for neurotransmitter analysis are needed. We therefore developed a multimode-sensing method for pattern recognition of the neurotransmitters dopamine, epinephrine and norepinephrine. Multimode sensing was performed using microsensors based on diamond paste modified with 5,10,15,20-tetraphenyl-21H,23H-porphyrine, hemin and protoporphyrin IX, operated in stochastic and differential pulse voltammetry modes. Optimized working conditions, a phosphate buffer solution of pH 3.01 with 0.1 mol/L KCl as supporting electrolyte, were determined using cyclic voltammetry and used in all measurements. The lowest limits of quantification were 10(-10) mol/L for dopamine and epinephrine, and 10(-11) mol/L for norepinephrine. The multimode microsensors were selective over ascorbic and uric acids, and the method facilitated reliable assay of neurotransmitters in urine samples; the pattern recognition accordingly showed high reliability, with neurotransmitters determined in biological fluids at a lower determination level than chromatographic methods. Sampling of the biological fluids requires only buffering (1:1, v/v) with a phosphate buffer of pH 3.01, whereas sampling for chromatographic methods is laborious. According to the statistical evaluation of the results at the 99.00% confidence level, both modes can be used for pattern recognition and quantification of neurotransmitters with high reliability. The best multimode microsensor was the one based on diamond paste modified with protoporphyrin IX. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. On Improving Reliability of SRAM-Based Physically Unclonable Functions

    Directory of Open Access Journals (Sweden)

    Arunkumar Vijayakumar

    2017-01-01

    Full Text Available Physically unclonable functions (PUFs) have been touted for their inherent resistance to invasive attacks and low cost in providing a hardware root of trust for various security applications. SRAM PUFs in particular are popular in industry for key/ID generation. Owing to intrinsic process variations, each SRAM cell has a preferred start-up state, and ideally a given cell exhibits the same start-up behavior on every power-up. SRAM PUFs exploit this start-up behavior. Unfortunately, not all SRAM cells exhibit reliable start-up behavior, due to noise susceptibility; hence, design enhancements are needed for improving reliability. Enhancements proposed in the literature include fuzzy extraction, error-correcting codes and voting mechanisms, all of which trade area/power/performance overhead against PUF reliability. This paper presents a design enhancement technique for reliability that improves upon previous solutions. We present simulation results to quantify the improvement in SRAM PUF reliability and efficiency. The proposed technique is shown to generate a 128-bit key in ≤0.2 μs at an area estimate of 4538 μm² with an error rate as low as 10⁻⁶ for an intrinsic error probability of 15%.
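The effect of one of the enhancements named above, majority voting over repeated start-up readouts, can be illustrated with a short calculation. This is the textbook voting model assuming independent bit flips, not the paper's proposed technique; the per-readout error probability of 15% matches the intrinsic error probability quoted in the abstract.

```python
from math import comb

def voted_error_rate(p, n):
    """Bit error rate after majority voting over n (odd) independent readouts
    of one SRAM cell, where each readout flips with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Intrinsic start-up error probability of 15%, as quoted above
rates = {n: voted_error_rate(0.15, n) for n in (1, 5, 11, 21)}
```

The error rate falls rapidly with the number of votes, which is exactly the reliability-versus-overhead trade-off the abstract describes: more readouts cost time and power but drive the key error rate down.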

  8. Automated Fringe Pattern Acquisition for Portable Laser Shearography

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Yusnisyam Yusof; Wan Saffiey Wan Abdullah

    2013-01-01

    In a shearography system, one of the most important tasks is the automatic, fast and reliable processing of fringe patterns to allow real-time inspection. Developments in digital CCD cameras, fast and compact PC-based data acquisition, and high-power lasers have led to dramatic performance improvements in shearography instruments and systems. This paper concentrates on the development of fringe pattern acquisition using a digital CCD camera for a portable laser shearography system. A new program for fringe pattern processing, which incorporates rapid methods for automatic fringe pattern acquisition and filtering, has been developed. The algorithm is written in MATLAB. A graphical user interface with several functions was developed to ensure easy adaptation to custom applications and to provide a flexible way to add functions. The preliminary results show that the developed algorithm can be used to generate good-contrast, reliable fringe patterns. (author)

  9. A Weighted Kappa Coefficient for Three Observers as a Measure for Reliability of Expert Ratings on Characteristics in Handball Throwing Patterns

    Science.gov (United States)

    Schorer, Jorg; Weiss, Christel

    2007-01-01

    Many methods for the identification of key variables in movement patterns have the problem that direct transferability back to coaches or players is not easy; this is because the variables used are not presented in the language of most sport participants. Using a research strategy--from the coaches back to the coaches--proposed by Roth (1996),…

  10. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high-order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high-order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which are used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables in connection with the point estimate method. Cross terms between pairs of significant random variables are then added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored for solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when the design variables are distributional parameters of the input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. It is more efficient than first-order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
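The Gauss-Hermite ingredient of the method can be illustrated in one dimension. This sketch evaluates a performance function at Gauss-Hermite quadrature points of a standard normal variable, forms its first two moments, and applies a second-moment reliability approximation; it is a simplified illustration of the point-estimate idea, not the paper's HORSM with cross terms.

```python
import math
import numpy as np

def moment_reliability(g, n_pts=7):
    """Estimate the mean and variance of g(U), U ~ N(0,1), by quadrature at
    probabilists' Gauss-Hermite points, then return the second-moment
    reliability index beta = mu/sigma and Pf = Phi(-beta)."""
    x, w = np.polynomial.hermite_e.hermegauss(n_pts)  # points and weights
    w = w / math.sqrt(2.0 * math.pi)                  # weights now sum to 1
    g_vals = np.array([g(xi) for xi in x])
    mu = float(w @ g_vals)                            # first moment of g(U)
    var = float(w @ (g_vals - mu) ** 2)               # second central moment
    beta = mu / math.sqrt(var)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))       # standard normal CDF at -beta
    return beta, pf

# Linear limit state g(U) = 3 - U has exact reliability index beta = 3
beta, pf = moment_reliability(lambda u: 3.0 - u)
```

For this linear limit state the quadrature moments are exact, so the sketch reproduces the known failure probability Phi(-3); nonlinearity and cross terms are where the paper's higher-order surface earns its keep.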

  11. Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize its reliability and security. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: field programmable gate array (FPGA), triple modular redundancy (TMR), verification, trust, reliability.

  12. Increase of hydroelectric power plant operation reliability

    International Nuclear Information System (INIS)

    Koshumbaev, M.B.

    2006-01-01

    The new design of a hydroelectric power plant (HPP) turbine is executed in the form of a pipe with plates. The proposed solution increases plant capacity at the existing head and water flow, while improving turbine reliability and operating performance. The design is most effective for small-scale and micro-HPPs owing to its reliable operation, simple technology and ecological harmlessness. (author)

  13. A Reliability Test of a Complex System Based on Empirical Likelihood

    OpenAIRE

    Zhou, Yan; Fu, Liya; Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic for the complex system and derive the limit distribution of the test statistic. We can therefore obtain confidence intervals for the reliability and make statistical inferences. Simulation studies corroborate the theoretical results.

  14. Physics of Failure as a Basis for Solder Elements Reliability Assessment in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    description of the reliability. A physics of failure approach is applied. A SnAg solder component used in power electronics is used as an example. Crack propagation in the SnAg solder is modeled and a model to assess the accumulated plastic strain is proposed based on a physics of failure approach. Based...... on the proposed model it is described how to find the accumulated linear damage and reliability levels for a given temperature loading profile. Using structural reliability methods the reliability levels of the electrical components are assessed by introducing scale factors for stresses....

  15. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended for reliability estimation of systems composing of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between...... was applied for the systems failure functions estimation. It is desired to compare the results with the true system failure function, which is possible to estimate using simulation techniques. Theoretical model development should be applied for the further research. One of the directions for it might...... be modeling the system based on the Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea to represent the system by the independent components could also be used for modeling reliability by Sequential Order Statistics....

  16. Reliability analysis for thermal cutting method based non-explosive separation device

    International Nuclear Information System (INIS)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu

    2016-01-01

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils

  17. Reliability analysis for thermal cutting method based non-explosive separation device

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu [Korea Aerospace University, Goyang (Korea, Republic of)

    2016-12-15

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.
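A device-level figure like the one quoted is typically obtained by combining stage reliabilities from the fault tree: single-point stages multiply in series, while the redundant Ni-Cr coils form a parallel group. The sketch below uses hypothetical stage reliabilities, not the paper's FTA input data, which the abstract does not give.

```python
from functools import reduce
from operator import mul

def series(rs):
    """All stages must work: stage reliabilities multiply."""
    return reduce(mul, rs)

def parallel(rs):
    """Redundant group: it fails only if every branch fails."""
    return 1.0 - reduce(mul, [1.0 - r for r in rs])

# Hypothetical numbers: five redundant Ni-Cr coils, each 0.90 reliable,
# in series with four single-point stages of reliability 0.9999 each.
r_coils = parallel([0.90] * 5)        # 1 - 0.1**5 = 0.99999
r_device = series([0.9999, 0.9999, 0.9999, 0.9999, r_coils])
```

Even modest per-coil reliability yields a very reliable wire-cutting stage once five coils are placed in redundancy, which is why the single-point stages then dominate the device figure.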

  18. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a reliability-sensitivity based design optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.

  19. Optimal number of tests to achieve and validate product reliability

    International Nuclear Information System (INIS)

    Ahmed, Hussam; Chateauneuf, Alaa

    2014-01-01

    The reliability validation of engineering products and systems is mandatory for choosing the best cost-effective design among a series of alternatives. Decisions at early design stages have a large effect on the overall life cycle performance and cost of products. In this paper, an optimization-based formulation is proposed that couples the costs of product design and validation testing, in order to ensure product reliability with the minimum number of tests. This formulation addresses the question of how many reliability demonstration tests must be specified to validate the product at an appropriate confidence level. The proposed formulation takes into account the product cost, the failure cost and the testing cost. The optimization problem can be considered a decision-making system according to the hierarchy of structural reliability measures. The numerical examples show the interest of coupling design and testing parameters. - Highlights: • Coupled formulation for design and testing costs, with lifetime degradation. • Cost-effective testing optimization to achieve reliability target. • Solution procedure for nested aleatoric and epistemic variable spaces
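A classical special case of the question posed above, how many tests to specify, is the zero-failure (success-run) demonstration test, sketched below. This is standard reliability demonstration arithmetic, not the paper's cost-coupled optimization formulation.

```python
import math

def zero_failure_tests(target_reliability, confidence):
    """Smallest number n of failure-free tests such that observing n successes
    demonstrates the reliability target at the given confidence level,
    i.e. the smallest n with 1 - target_reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(target_reliability))

n90 = zero_failure_tests(0.95, 0.90)   # demonstrate R >= 0.95 at 90% confidence
n95 = zero_failure_tests(0.95, 0.95)   # same target at 95% confidence
```

The steep growth of the test count with both the target and the confidence level is precisely what makes coupling testing cost into the design optimization worthwhile.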

  20. A hybrid approach to quantify software reliability in nuclear safety systems

    International Nuclear Information System (INIS)

    Arun Babu, P.; Senthil Kumar, C.; Murali, N.

    2012-01-01

    Highlights: ► A novel method to quantify software reliability using software verification and mutation testing in nuclear safety systems. ► Contributing factors that influence the software reliability estimate. ► An approach to help regulators verify the reliability of safety-critical software during the licensing process. -- Abstract: Technological advancements have led to the use of computer-based systems in safety-critical applications. As computer-based systems are being introduced in nuclear power plants, effective and efficient methods are needed to ensure dependability and compliance with the high reliability requirements of systems important to safety. Even after several years of research, quantification of software reliability remains a controversial and unresolved issue. Moreover, existing approaches have assumptions and limitations that are not acceptable for safety applications. This paper proposes a theoretical approach combining software verification and mutation testing to quantify software reliability in nuclear safety systems. The theoretical results obtained suggest that software reliability depends on three factors: the test adequacy, the amount of software verification carried out and the reusability of verified code in the software. The proposed approach may help regulators in licensing computer-based safety systems in nuclear reactors.

  1. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed using the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
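The PageRank step over a cascading-failure digraph can be sketched as follows. This is a generic power-iteration PageRank on a toy adjacency matrix; the damping factor and graph are invented for illustration. Running it on the matrix scores how strongly each component is affected by cascades, while running it on the transpose scores how widely a component's own failure spreads.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=200):
    """Power-iteration PageRank on a digraph given as a dense adjacency
    matrix (adj[i][j] = 1 if failure of i propagates to j); rows with no
    outgoing edges are spread uniformly."""
    A = np.asarray(adj, dtype=float)
    n = len(A)
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out > 0, out, 1.0), 1.0 / n)  # row-stochastic
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - d) / n + d * (P.T @ r)  # redistribute rank along edges
    return r

# Toy cascade digraph: failures of 0 propagate to 1 and 2, failures of 1 to 2.
adj = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
influenced = pagerank(adj)      # high for components hit by many cascades
influencing = pagerank(adj.T)   # high for components whose failure spreads widely
```

In this toy graph component 2 receives the most cascades while component 0 causes the most, mirroring the abstract's use of the adjacency matrix and its transposition.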

  2. A new algorithm for finding survival coefficients employed in reliability equations

    Science.gov (United States)

    Bouricius, W. G.; Flehinger, B. J.

    1973-01-01

    Product reliabilities are predicted from past failure rates and reasonable estimates of future failure rates. The algorithm calculates the probability that the product will function correctly by summing, over all possible ways in which the product can survive, the probability of each survival pattern multiplied by the number of permutations of that pattern.
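For the common k-out-of-n survival structure, the pattern-summing idea reduces to a binomial sum: each pattern with j working components has probability p^j (1-p)^(n-j) and comb(n, j) permutations. A minimal sketch, assuming identical and independent components (which the paper's general algorithm does not require):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Sum, over every survival pattern with j >= k working components, the
    pattern probability p**j * (1-p)**(n-j) times its comb(n, j) permutations."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

r23 = k_out_of_n_reliability(2, 3, 0.9)   # 2-out-of-3 system, component reliability 0.9
```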

  3. A fast approximation method for reliability analysis of cold-standby systems

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Amari, Suprasad V.

    2012-01-01

    Analyzing reliability of large cold-standby systems has been a complicated and time-consuming task, especially for systems with components having non-exponential time-to-failure distributions. In this paper, an approximation model, which is based on the central limit theorem, is presented for the reliability analysis of binary cold-standby systems. The proposed model can estimate the reliability of large cold-standby systems with binary-state components having arbitrary time-to-failure distributions in an efficient and easy way. The accuracy and efficiency of the proposed method are illustrated using several different types of distributions for both 1-out-of-n and k-out-of-n cold-standby systems.
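The central-limit-theorem idea is that a 1-out-of-n cold-standby system's lifetime is the sum of its component lifetimes, so for large n the system lifetime is approximately normal whatever the component distributions. A minimal sketch with assumed component moments:

```python
import math

def cold_standby_reliability_clt(t, means, variances):
    """CLT approximation for a 1-out-of-n cold-standby system: the total
    lifetime is the sum of the n component lifetimes, treated as normal
    with mean sum(means) and variance sum(variances)."""
    mu = sum(means)
    sigma = math.sqrt(sum(variances))
    z = (t - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # P(total lifetime > t)

# Ten identical standby units: mean life 100 h, variance 900 h^2 each
r900 = cold_standby_reliability_clt(900.0, [100.0] * 10, [900.0] * 10)
r1000 = cold_standby_reliability_clt(1000.0, [100.0] * 10, [900.0] * 10)
```

Only the first two moments of each component's lifetime are needed, which is why the approach handles arbitrary time-to-failure distributions so cheaply.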

  4. Metrological Reliability of Medical Devices

    Science.gov (United States)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies in the 20th century triggered demands for the metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of biomeasurement results, biometrological principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with an analysis of their contribution to guaranteeing that innovative health technologies comply with the main ethical pillars of bioethics.

  5. Reliability Assessment Method of Reactor Protection System Software by Using V and Vbased Bayesian Nets

    International Nuclear Information System (INIS)

    Eom, H. S.; Park, G. Y.; Kang, H. G.; Son, H. S.

    2010-07-01

    We developed a methodology that can be practically used in the quantitative reliability assessment of safety-critical software for the protection system of a nuclear power plant. The proposed methodology is based on the V and V practices used in the nuclear industry, which means it is not affected by the specific software development environments or parameters otherwise necessary for the reliability calculation. The modular and formal sub-BNs in the proposed methodology are a useful tool for constituting the whole BN model for reliability assessment of a target software. The proposed V and V based BN model estimates the defects in the software according to the V and V results and then calculates the reliability of the software. A case study was carried out to validate the proposed methodology. The target software was the RPS software developed in the KNICS project

  6. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    Science.gov (United States)

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-02-23

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated in the link cost metric, which gives a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through numeric simulations of topology stability, average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  7. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    Directory of Open Access Journals (Sweden)

    Qing Chen

    2017-02-01

    Full Text Available This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated in the link cost metric, which gives a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through numeric simulations of topology stability, average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  8. Efficient discovery of risk patterns in medical data.

    Science.gov (United States)

    Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul

    2009-01-01

    This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler-form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm, and we propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure for presenting discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better-quality risk patterns than a decision tree approach; the decision tree method is not designed for such applications and is inadequate for pattern exploration. The proposed method does not discover the large number of uninteresting superfluous patterns that an association mining approach does, and it is more efficient than an association rule mining method. A real-world case study shows that the method reveals interesting risk patterns to medical practitioners. The proposed method is an efficient approach for exploring risk patterns: it quickly identifies cohorts of patients that are vulnerable to a risk outcome in a large data set. It is useful for exploratory studies on large medical data, to generate and refine hypotheses, and for designing medical surveillance systems.
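The relative-risk metric that defines risk patterns can be stated directly; the counts below are invented for illustration.

```python
def relative_risk(pattern_cases, pattern_total, other_cases, other_total):
    """Relative risk of a pattern: incidence of the outcome among records
    matching the pattern divided by incidence among the remaining records."""
    return (pattern_cases / pattern_total) / (other_cases / other_total)

# Hypothetical cohort: 30/100 matching records have the outcome vs 10/100 others
rr = relative_risk(30, 100, 10, 100)   # the pattern triples the risk
```

In the mining algorithm, a pattern enters the optimal set only if its relative risk exceeds that of every simpler sub-pattern, which is the pruning condition the anti-monotone property makes efficient.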

  9. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    Science.gov (United States)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, the multiple uncertainties of FDRs may influence the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, with regard to the responsive abilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is utilized as an illustration of the proposed technique.

  10. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of a SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool thus has a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.

  11. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual load-bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method for calculating multiple integrals is proposed in this paper. The dual neural network consists of two neural networks: network A learns the integrand function, and network B simulates the original (antiderivative) function. According to the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of the multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.

  12. Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication

    Science.gov (United States)

    Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn

    2014-11-01

    Wireless networking increases flexibility in home and office environments by connecting to the internet without wires, but at the cost of risks such as data theft or the loading of malicious code intended to harm the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity transmission using two optical transmitters is employed, and reliability of the link is achieved by a newly proposed method for constructing the structured parity-check matrix of binary Low-Density Parity-Check (LDPC) codes. Experimental results show that a secure and reliable link between the transmitter and the receiver can be achieved using the proposed technique.

  13. 75 FR 14103 - Version One Regional Reliability Standard for Resource and Demand Balancing

    Science.gov (United States)

    2010-03-24

    ... current regional Reliability Standard was developed and used under a manual interchange transaction...] Version One Regional Reliability Standard for Resource and Demand Balancing March 18, 2010. AGENCY... section 215 of the Federal Power Act, the Commission proposes to remand a revised regional Reliability...

  14. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort compared with the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy
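    The core idea of predicting failures over a time interval can be illustrated with a crude Monte Carlo sketch (no Kriging surrogate here; the linear degradation and load models below are invented for the example, not taken from the paper): sample the random strength once, then check whether the load ever exceeds the degraded strength anywhere in the interval.

```python
import random

random.seed(1)

def survives(T=10.0, dt=1.0):
    """One life-cycle realization: strength S ~ N(8, 0.5) degrades
    linearly; the system survives [0, T] if the random load N(5, 0.5)
    never exceeds the current strength at any inspection time."""
    S = random.gauss(8.0, 0.5)
    t = 0.0
    while t <= T:
        if random.gauss(5.0, 0.5) > S - 0.1 * t:
            return False
        t += dt
    return True

# Time-variant reliability over [0, 10] estimated by Monte Carlo.
n = 20000
rel = sum(survives() for _ in range(n)) / n
```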

  15. Polyhedral patterns

    KAUST Repository

    Jiang, Caigui

    2015-10-27

    We study the design and optimization of polyhedral patterns, i.e. patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns of the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undergo to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries, yielding a plethora of aesthetic and feasible patterns.

  16. Reliability concepts applied to cutting tool change time

    Energy Technology Data Exchange (ETDEWEB)

    Patino Rodriguez, Carmen Elena, E-mail: cpatino@udea.edu.c [Department of Industrial Engineering, University of Antioquia, Medellin (Colombia); Department of Mechatronics and Mechanical Systems, Polytechnic School, University of Sao Paulo, Sao Paulo (Brazil); Francisco Martha de Souza, Gilberto [Department of Mechatronics and Mechanical Systems, Polytechnic School, University of Sao Paulo, Sao Paulo (Brazil)

    2010-08-15

    This paper presents a reliability-based analysis for calculating critical tool life in machining processes. It is possible to determine the running time for each tool involved in the process by obtaining the operations sequence for the machining procedure. Usually, the reliability of an operation depends on three independent factors: operator, machine-tool and cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time for each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled based on data presented in the literature, and from experimental results, a statistical distribution of drilling tool wear was defined, and the reliability of the drilling process was modeled.
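    Assuming Weibull wear-out for each tool (the parameters below are invented, not the paper's fitted values), the series-process reliability for a batch of parts might be sketched as the product of each operation's tool reliability at its accumulated cutting time:

```python
import math

def weibull_rel(t, beta, eta):
    """Tool reliability R(t) = exp(-(t/eta)^beta) for wear-out failures."""
    return math.exp(-((t / eta) ** beta))

# (operation name, cutting time per part in min, shape beta, scale eta);
# all values are illustrative assumptions.
operations = [
    ("turning", 2.0, 2.5, 60.0),
    ("drilling", 1.0, 3.0, 45.0),
]

def process_reliability(parts):
    """Series configuration: every operation's tool must survive the
    accumulated cutting time for the whole batch."""
    r = 1.0
    for _, t_per_part, beta, eta in operations:
        r *= weibull_rel(t_per_part * parts, beta, eta)
    return r
```

A tool-change rule can then be read off this curve, e.g. change the tool before `process_reliability(parts)` drops below a required target.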

  17. Reliability concepts applied to cutting tool change time

    International Nuclear Information System (INIS)

    Patino Rodriguez, Carmen Elena; Francisco Martha de Souza, Gilberto

    2010-01-01

    This paper presents a reliability-based analysis for calculating critical tool life in machining processes. It is possible to determine the running time for each tool involved in the process by obtaining the operations sequence for the machining procedure. Usually, the reliability of an operation depends on three independent factors: operator, machine-tool and cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time for each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled based on data presented in the literature, and from experimental results, a statistical distribution of drilling tool wear was defined, and the reliability of the drilling process was modeled.

  18. Reliability and comparative validity of a Diet Quality Index for assessing dietary patterns of preschool-aged children in Sydney, Australia.

    Science.gov (United States)

    Kunaratnam, Kanita; Halaki, Mark; Wen, Li Ming; Baur, Louise A; Flood, Victoria M

    2018-03-01

    To report on the reliability and validity of a Diet Quality Index (DQI) to assess preschoolers' dietary patterns using a short food frequency questionnaire (sFFQ) and 3-day food records (3d-FR). Seventy-seven preschool carers/parents completed a telephone interview on preschoolers' (2-5-year-olds) dietary habits in metropolitan Sydney. Agreement in scores was assessed using intraclass correlation (ICC) and paired t-tests for repeated sFFQ-DQI scores, and Bland-Altman methods and paired t-tests for sFFQ-DQI and 3d-FR-DQI scores. The mean total sFFQ-DQI ICC was high: 0.89, 95% CI (0.81, 0.93). There was weak agreement between sFFQ-DQI and 3d-FR-DQI scores (r = 0.36, p < 0.01). The 3d-FR-DQI scores were positively associated with carbohydrate, folate, ß-carotene, magnesium, calcium, protein and total fat, and negatively associated with sugar, starch, niacin, vitamin C, phosphorus, polyunsaturated fat and monounsaturated fat. The sFFQ-DQI demonstrated good reliability but weak validity. Associations between nutrients and 3d-FR-DQI scores indicate promising usability and warrant further investigation. Further research is needed to establish its validity in accurately scoring children's diet quality using sFFQ compared to 3d-FR before the tool can be implemented in population settings.

  19. Non-utility generation and demand management reliability of customer delivery systems

    International Nuclear Information System (INIS)

    Hamoud, G.A.; Wang, L.

    1995-01-01

    A probabilistic methodology for evaluating the impact of non-utility generation (NUG) and demand management programs (DMPs) on the supply reliability of customer delivery systems is presented. The proposed method is based on the criterion that supply reliability to the customers on the delivery system should not be affected by the integration of either NUG or DMPs. The method considers the station load profile, the load forecast, and uncertainty in the size and availability of the NUG. Impacts on system reliability are expressed in terms of possible delays of the in-service date for new facilities or in terms of an increase in the system load carrying capability. Examples to illustrate the proposed methodology are provided. 10 refs., 8 tabs., 2 figs

  20. Swarm of bees and particles algorithms in the problem of gradual failure reliability assurance

    Directory of Open Access Journals (Sweden)

    M. F. Anop

    2015-01-01

    Full Text Available The probability-statistical framework of reliability theory uses models based on the analysis of chance failures. These models are not functional and do not reflect the relation of reliability characteristics to the object's performance. At the same time, a significant part of technical system failures are gradual failures caused by degradation of the internal parameters of the system under the influence of various external factors. The paper shows how to provide the required level of reliability at the design stage using a functional model of a technical object. It describes a method for solving this problem under incomplete initial information, when there is no information about the patterns of technological deviations and parameter degradation, and the system model considered is a "black box" one. To this end, we formulate the problem of optimal parametric synthesis: choosing the nominal values of the system parameters so as to satisfy the requirements for its operation while taking into account the unavoidable deviations of the parameters from their design values during operation. As the optimization criterion we propose to use, instead of statistical values, a deterministic geometric criterion, the "reliability reserve": the minimum distance, measured along the coordinate directions, from the nominal parameter value to the boundary of the acceptability region. The paper presents the results of applying heuristic swarm intelligence methods to the formulated optimization problem. The efficiency of particle swarm and bee swarm algorithms is compared with an undirected random search algorithm on a number of test optimal parametric synthesis problems in three respects: reliability, convergence rate and operating time. The study suggests that the use of the bee swarm method for ensuring gradual-failure reliability of technical systems is preferred because of the greater flexibility of the
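    The "reliability reserve" criterion can be illustrated with a toy box-shaped acceptability region and a minimal particle swarm optimizer. Everything below is illustrative (region, swarm parameters, seed), not the paper's actual test problems:

```python
import random

random.seed(2)

def reserve(p):
    """Reliability reserve for a box acceptability region 0<=x<=10,
    0<=y<=4: minimum coordinate distance to the boundary (negative
    value marks an infeasible nominal point)."""
    x, y = p
    if not (0 <= x <= 10 and 0 <= y <= 4):
        return -1.0
    return min(x, 10 - x, y, 4 - y)

def pso(f, dim=2, n=20, iters=100, lo=0.0, hi=10.0):
    """Bare-bones particle swarm maximization of f."""
    pts = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pts]
    gbest = max(pbest, key=f)[:]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if f(p) > f(pbest[i]):
                pbest[i] = p[:]
            if f(p) > f(gbest):
                gbest = p[:]
    return gbest

best = pso(reserve)  # optimum reserve for this box is 2.0, at y = 2
```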

  1. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiu-Vlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  2. Factorization of Constrained Energy K-Network Reliability with Perfect Nodes

    OpenAIRE

    Burgos, Juan Manuel

    2013-01-01

    This paper proves a new general K-network constrained energy reliability global factorization theorem. As in the unconstrained case, besides its theoretical mathematical importance the theorem shows how to do parallel processing in exact network constrained energy reliability calculations in order to reduce the processing time of this NP-hard problem. This is followed by a new simple factorization formula for its calculation, and we propose a new definition of constrained energy network reliability motiva...

  3. Revisiting radiation patterns in e+e- collisions

    International Nuclear Information System (INIS)

    Fischer, N.; Gieseke, S.

    2014-02-01

    We propose four simple event-shape variables for semi-inclusive e+e− → 4-jet events. The observables and cuts are designed to be especially sensitive to subleading aspects of the event structure, and allow to test the reliability of phenomenological QCD models in greater detail. Three of them, θ_14, θ*, and C_2^(1/5), focus on soft emissions off three-jet topologies with a small opening angle, for which coherence effects beyond the leading QCD dipole pattern are expected to be enhanced. A complementary variable, M_L^2/M_H^2, measures the ratio of the hemisphere masses in 4-jet events with a compressed scale hierarchy (Durham y_23 ≈ y_34), for which subleading 1→3 splitting effects are expected to be enhanced. We consider several different parton-shower models, spanning both conventional and dipole/antenna ones, all tuned to the same e+e− reference data, and show that a measurement of the proposed observables would allow for additional significant discriminating power between the models.

  4. Reliability demonstration test planning: A three dimensional consideration

    International Nuclear Information System (INIS)

    Yadav, Om Prakash; Singh, Nanua; Goel, Parveen S.

    2006-01-01

    Increasing customer demand for reliability, fierce market competition on time-to-market and cost, and highly reliable products are making reliability testing a more challenging task. This paper presents a systematic approach for identifying critical elements (subsystems and components) of a system and deciding the types of tests to be performed to demonstrate reliability. It decomposes the system along three dimensions (physical, functional and time) and identifies critical elements in the design by allocating system-level reliability to each candidate. The decomposition of system-level reliability is achieved by using a criticality index. The numerical value of the criticality index for each candidate is derived from the information available in failure mode and effects analysis (FMEA) documents or warranty data from a prior system. This information is used to develop a reliability demonstration test plan for the identified (critical) failure mechanisms and physical elements. The paper also highlights the benefits of using prior information to locate critical spots in the design and in the subsequent development of test plans. A case example is presented to demonstrate the proposed approach
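    One common way to turn criticality indexes into a reliability allocation is to give historically failure-prone items a larger share of the allowed unreliability, so the series product still meets the system target. This is a hedged sketch in the spirit of ARINC-style allocation; the weighting rule and the FMEA-like scores are assumptions, not necessarily the paper's exact scheme:

```python
def allocate(r_sys_target, criticality):
    """Allocate a series-system reliability target to components.

    With weights w_i = c_i / sum(c) summing to 1, setting
    R_i = r_sys_target ** w_i guarantees prod(R_i) == r_sys_target,
    while items with higher criticality get a looser requirement."""
    total = sum(criticality.values())
    return {k: r_sys_target ** (c / total) for k, c in criticality.items()}

# Illustrative criticality scores (e.g. normalized FMEA RPN values).
crit = {"pump": 5.0, "seal": 3.0, "housing": 2.0}
alloc = allocate(0.95, crit)
```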

  5. A subject-independent pattern-based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Andreas Markus Ray

    2015-10-01

    Full Text Available While earlier Brain-Computer Interface (BCI) studies have mostly focused on modulating specific brain regions or signals, new developments in pattern classification of brain states are enabling real-time decoding and modulation of an entire functional network. The present study proposes a new method for real-time pattern classification and neurofeedback of brain states from electroencephalographic (EEG) signals. It involves the creation of a fused classification model based on the method of Common Spatial Patterns (CSP) from data of several healthy individuals. The subject-independent model is then used to classify EEG data in real-time and provide feedback to new individuals. In a series of offline experiments involving training and testing of the classifier with individual data from 27 healthy subjects, a mean classification accuracy of 75.30% was achieved, demonstrating that the classification system at hand can reliably decode two types of imagery used in our experiments, i.e. happy emotional imagery and motor imagery. In a subsequent experiment it is shown that the classifier can be used to provide neurofeedback to new subjects, and that these subjects learn to match their brain pattern to that of the fused classification model in a few days of neurofeedback training. This finding can have important implications for future studies on neurofeedback and its clinical applications on neuropsychiatric disorders.

  6. Robust against route failure using power proficient reliable routing in MANET

    Directory of Open Access Journals (Sweden)

    M. Malathi

    2018-03-01

    Full Text Available The aim of this paper is to propose a novel routing protocol for Mobile Adhoc Network communication which reduces route failure during transmission. The proposed routing protocol uses three salient parameters to discover a path that ensures reliable communication. Channel quality, link quality and the energy level of a node are the major causes of unintentional node failure in a mobile Adhoc network, so the proposed routing protocol considers these three parameters when selecting the best forwarder node in the path. Reliable data communication over the path selected by the proposed routing scheme has been demonstrated using the network simulator NS2. Keywords: Channel quality, Link quality, Mobile Adhoc Network (MANET), Residual energy
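    A best-forwarder selection over the three parameters could look like the following sketch. The weights and neighbour values are invented for illustration; the paper's actual scoring rule may differ:

```python
def score(node, w=(0.4, 0.3, 0.3)):
    """Weighted combination of channel quality, link quality and residual
    energy, each normalized to [0, 1]. Weights are an assumption."""
    return (w[0] * node["channel"] + w[1] * node["link"]
            + w[2] * node["energy"])

# Candidate next-hop neighbours with illustrative measurements.
neighbours = [
    {"id": "A", "channel": 0.9, "link": 0.6, "energy": 0.4},
    {"id": "B", "channel": 0.7, "link": 0.8, "energy": 0.9},
    {"id": "C", "channel": 0.5, "link": 0.9, "energy": 0.7},
]

# The forwarder with the highest combined score is selected.
best = max(neighbours, key=score)
```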

  7. Bypassing BDD Construction for Reliability Analysis

    DEFF Research Database (Denmark)

    Williams, Poul Frederick; Nikolskaia, Macha; Rauzy, Antoine

    2000-01-01

    In this note, we propose a Boolean Expression Diagram (BED)-based algorithm to compute the minimal p-cuts of boolean reliability models such as fault trees. BEDs make it possible to bypass the Binary Decision Diagram (BDD) construction, which is the main cost of fault tree assessment....
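    For small fault trees, the minimal cut sets that BDD/BED methods compute efficiently can be found by brute-force enumeration, which makes the concept concrete (the toy tree below is invented, not from the paper; brute force is only feasible for a handful of components):

```python
from itertools import combinations

def top(failed):
    """Toy fault-tree structure function:
    TOP = (A and B) or (A and C) or D."""
    return {"A", "B"} <= failed or {"A", "C"} <= failed or "D" in failed

components = ["A", "B", "C", "D"]
cuts = []
# Enumerate component subsets by increasing size; keep a failing subset
# only if no smaller cut set is contained in it (minimality).
for r in range(1, len(components) + 1):
    for combo in combinations(components, r):
        s = set(combo)
        if top(s) and not any(c <= s for c in cuts):
            cuts.append(s)
```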

  8. Evaluating the reliability of multi-body mechanisms: A method considering the uncertainties of dynamic performance

    International Nuclear Information System (INIS)

    Wu, Jianing; Yan, Shaoze; Zuo, Ming J.

    2016-01-01

    Mechanism reliability is defined as the ability of a certain mechanism to maintain output accuracy under specified conditions. Mechanism reliability is generally assessed by the classical direct probability method (DPM) derived from the first order second moment (FOSM) method. The DPM relies strongly on the analytical form of the dynamic solution so it is not applicable to multi-body mechanisms that have only numerical solutions. In this paper, an indirect probability model (IPM) is proposed for mechanism reliability evaluation of multi-body mechanisms. IPM combines the dynamic equation, degradation function and Kaplan–Meier estimator to evaluate mechanism reliability comprehensively. Furthermore, to reduce the amount of computation in practical applications, the IPM is simplified into the indirect probability step model (IPSM). A case study of a crank–slider mechanism with clearance is investigated. Results show that relative errors between the theoretical and experimental results of mechanism reliability are less than 5%, demonstrating the effectiveness of the proposed method. - Highlights: • An indirect probability model (IPM) is proposed for mechanism reliability evaluation. • The dynamic equation, degradation function and Kaplan–Meier estimator are used. • Then the simplified form of indirect probability model is proposed. • The experimental results agree well with the predicted results.
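    The Kaplan–Meier estimator mentioned in the abstract can be sketched in a few lines. Distinct event times are assumed for simplicity, and the (time, event) data are invented (event = 1 for an observed failure, 0 for a censored observation):

```python
def kaplan_meier(data):
    """Kaplan-Meier survival curve from (time, event) pairs.

    Returns a list of (time, survival probability) points at each
    observed failure time; censored observations only shrink the
    at-risk count."""
    at_risk = len(data)
    s, curve = 1.0, []
    for t, event in sorted(data):
        if event:
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1
    return curve

curve = kaplan_meier([(2, 1), (3, 0), (5, 1), (7, 1), (8, 0)])
```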

  9. Comparative reliability of cheiloscopy and palatoscopy in human identification

    Directory of Open Access Journals (Sweden)

    Sharma Preeti

    2009-01-01

    Full Text Available Background: Establishing a person's identity in postmortem scenarios can be a very difficult process. Dental records, fingerprint and DNA comparisons are probably the most common techniques used in this context, allowing fast and reliable identification processes. However, under certain circumstances they cannot always be used; sometimes it is necessary to apply different and less known techniques. In forensic identification, lip prints and palatal rugae patterns can lead us to important information and help in a person's identification. This study aims to ascertain the use of lip prints and palatal rugae patterns in identification and sex differentiation. Materials and Methods: A total of 100 subjects, 50 males and 50 females, were selected from among the students of Subharti Dental College, Meerut. The materials used to record lip prints were lipstick, bond paper, cellophane tape, a brush for applying the lipstick, and a magnifying lens. To study palatal rugae, alginate impressions were taken and the dental casts analyzed for their various patterns. Results: Statistical analysis (applying the Z-test for proportions) showed a significant difference for type I, I′, IV and V lip patterns (P < 0.05) between males and females, while no significant difference was observed for the palatal rugae patterns (P > 0.05). Conclusion: This study not only showed that palatal rugae and lip prints are unique to an individual, but also that lip prints are more reliable for recognition of the sex of an individual.
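    The two-proportion Z-test used here for comparing pattern frequencies between the sexes is straightforward to reproduce; the counts below are invented for illustration, not the study's data:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion Z statistic with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: a lip pattern seen in 20/50 males vs 8/50 females.
z = two_prop_z(20, 50, 8, 50)
significant = abs(z) > 1.96  # two-sided test at alpha = 0.05
```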

  10. SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events

    DEFF Research Database (Denmark)

    Cuttone, Andrea; Bækgaard, Per; Sekara, Vedran

    2017-01-01

    We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.

  11. SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events.

    Science.gov (United States)

    Cuttone, Andrea; Bækgaard, Per; Sekara, Vedran; Jonsson, Håkan; Larsen, Jakob Eg; Lehmann, Sune

    2017-01-01

    We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.

  12. SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events.

    Directory of Open Access Journals (Sweden)

    Andrea Cuttone

    Full Text Available We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.

  13. Reliability analysis in interdependent smart grid systems

    Science.gov (United States)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems and studying the underlying network model, the interactions and relationships between components, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the effect of cascading failures and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
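    The percolation behaviour described here, a giant component that survives mild random failures but vanishes past a threshold, can be reproduced on a toy random graph (single network rather than interdependent ones; sizes, degrees and removal fractions are invented):

```python
import random
from collections import deque

random.seed(3)

def giant_fraction(n, avg_deg, removed_frac):
    """Build a random graph with ~avg_deg mean degree, remove a random
    fraction of nodes, and return the largest surviving connected
    component as a fraction of the original n nodes."""
    nodes = [v for v in range(n) if random.random() >= removed_frac]
    alive = set(nodes)
    adj = {v: [] for v in nodes}
    for _ in range(int(n * avg_deg / 2)):  # random edges
        a, b = random.randrange(n), random.randrange(n)
        if a != b and a in alive and b in alive:
            adj[a].append(b)
            adj[b].append(a)
    best, seen = 0, set()
    for v in nodes:  # BFS over surviving nodes to size each component
        if v in seen:
            continue
        q, comp = deque([v]), 0
        seen.add(v)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best / n

low = giant_fraction(2000, 4, 0.1)   # mild attack: giant component survives
high = giant_fraction(2000, 4, 0.9)  # heavy attack: the network fragments
```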

  14. Implementation of PATREC nuclear reliability program in PROLOG

    International Nuclear Information System (INIS)

    Koen, B.V.; Koen, D.B.

    1987-01-01

    PROLOG, the de facto computer language for research in artificial intelligence in Japan, is a logical choice for research in the pattern recognition strategy for evaluating the reliability of complex systems expressed as fault trees. PROLOG's basic data type is the tree, and its basic control construct is pattern matching. It is also based on recursive programming and allows dynamic allocation of memory, both of which are essential for an efficient reduction of the input tree. Since the inference engine of PROLOG automatically examines the user-defined data base in a systematic order, an additional advantage of this language is that the largest known pattern will always be found first without coding complex tree searches of the pattern library as was required in other computer languages such as PL/1 and LISP

  15. Strategic bidding of generating units in competitive electricity market with considering their reliability

    International Nuclear Information System (INIS)

    Soleymani, S.; Ranjbar, A.M.; Shirani, A.R.

    2008-01-01

    In restructured power systems, generating units are typically scheduled based on offers and bids to buy and sell energy and ancillary services (AS), subject to operational and security constraints. Generally, no account is taken of a unit's reliability when scheduling it, so generating units have no incentive to improve their reliability. This paper proposes a new method to obtain the equilibrium points for the reliability and price bidding strategies of units when unit reliability is considered in the scheduling problem. The proposed methodology employs the supply function equilibrium (SFE) to model a unit's bidding strategy. Units change their bidding strategies and improve their reliability until Nash equilibrium points are obtained. The GAMS (general algebraic modeling system) language has been used to solve the market scheduling problem using the DICOPT solver with mixed-integer non-linear programming. (author)

  16. Caching Patterns and Implementation

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2006-01-01

    Full Text Available Repetitious access to remote resources, usually data, constitutes a bottleneck for many software systems. Caching is a technique that can drastically improve the performance of any database application by avoiding multiple read operations for the same data. This paper addresses caching problems from a pattern perspective. Both caching and caching strategies, like primed and on-demand, are presented as patterns, and a pattern-based flexible caching implementation is proposed. The Caching pattern provides a way to avoid repeatedly reacquiring expensive resources. The Primed Cache pattern is applied in situations in which the set of required resources, or at least a part of it, can be predicted, while the Demand Cache pattern is applied whenever the required resource set cannot be predicted or is infeasible to buffer. The advantages and disadvantages of all the caching patterns presented are also discussed, and the lessons learned are applied in the implementation of the proposed pattern-based flexible caching solution.
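    A minimal sketch of the Demand (on-demand) Cache pattern, with a `warm()` method standing in for the Primed Cache variant. The `fetch` callable is a stand-in for an expensive remote read such as a database query; names here are illustrative, not the paper's:

```python
class DemandCache:
    """Demand cache: fetch on first miss, serve repeats from memory."""

    def __init__(self, fetch):
        self._fetch = fetch   # expensive acquisition function
        self._store = {}
        self.misses = 0

    def get(self, key):
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._fetch(key)
        return self._store[key]

    def warm(self, keys):
        """Primed-cache variant: pre-load a predictable key set."""
        for k in keys:
            self.get(k)

# Stand-in for a remote read (here it just upper-cases the key).
cache = DemandCache(fetch=lambda k: k.upper())
a1 = cache.get("user:1")
a2 = cache.get("user:1")  # second call is served from the cache
```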

  17. The reliability of nuclear power plant safety systems

    International Nuclear Information System (INIS)

    Susnik, J.

    1978-01-01

    A criterion was established concerning the protection that nuclear power plant (NPP) safety systems should afford. An estimate of the necessary or adequate reliability of the total complex of safety systems was derived. The acceptable unreliability of auxiliary safety systems is given, provided the reliability built into the specific NPP safety systems (ECCS, Containment) is to be fully utilized. A criterion for the acceptable unreliability of safety (sub)systems which occur in minimum cut sets having three or more components of the analysed fault tree was proposed. A set of input MTBF or MTTF values which fulfil all the set criteria and attain the appropriate overall reliability was derived. The sensitivity of results to input reliability data values was estimated. Numerical reliability evaluations were evaluated by the programs POTI, KOMBI and particularly URSULA, the last being based on Vesely's kinetic fault tree theory. (author)

  18. An AUTONOMOUS STAR IDENTIFICATION ALGORITHM BASED ON THE DIRECTED CIRCULARITY PATTERN

    Directory of Open Access Journals (Sweden)

    J. Xie

    2012-07-01

    Full Text Available The accuracy of the angular distance may decrease due to many factors, such as stellar camera parameters that are not calibrated on-orbit or low location accuracy of the star image points, which can cause low success rates in star identification. A robust directed circularity pattern algorithm is proposed in this paper, developed on the basis of the matching probability algorithm. The improved algorithm retains the matching probability strategy to identify the master star, and constructs a directed circularity pattern with the adjacent stars for unitary matching. The candidate matching group with the longest chain is selected as the final result. Simulation experiments indicate that the improved algorithm has a higher identification success rate and better reliability than the original algorithm. Experiments with real data are used to verify this.

  19. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins with the question of what reliability is, covering the origin of reliability problems and the definition and uses of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, estimation of MTBF, stochastic processes, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
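
    The quantities such a text covers - the reliability function, failure rate, MTBF, and availability - are related by a few standard formulas. A minimal sketch under the usual constant-failure-rate (exponential) assumption; the numeric rates are illustrative values, not data from the book:

```python
import math

def reliability(t, failure_rate):
    """Exponential model: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

def mtbf(failure_rate):
    """For a constant failure rate, MTBF = 1 / lambda."""
    return 1.0 / failure_rate

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

lam = 1e-4                            # failures per hour (illustrative)
r_1000h = reliability(1000.0, lam)    # survival probability over 1000 h
a = availability(mtbf(lam), 24.0)     # assuming a 24 h mean time to repair
```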

  20. An approach for assessing human decision reliability

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-01-01

    This paper presents a method to study human reliability in decision situations related to nuclear power plant disturbances. Decisions often play a significant role in the handling of emergency situations. The method may be applied to probabilistic safety assessments (PSAs) in cases where decision making is an important dimension of an accident sequence; such situations are frequent, e.g., in accident management. In this paper, a modelling approach for decision reliability studies is first proposed. Then, a case study with two decision situations with relatively different characteristics is presented, and the qualitative and quantitative findings of the study are discussed. In very simple decision cases with time pressure, time-reliability correlation proved to be a feasible reliability modelling method. In all other decision situations, more advanced probabilistic decision models have to be used. Finally, decision probability assessment using simulator run results and expert judgement is presented.

  1. Uncertainty propagation and sensitivity analysis in system reliability assessment via unscented transformation

    International Nuclear Information System (INIS)

    Rocco Sanseverino, Claudio M.; Ramirez-Marquez, José Emmanuel

    2014-01-01

    The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation (UT) to quantify the effects of component reliability uncertainty through two approaches. The first approach is based on the concept of uncertainty propagation: the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been previously considered in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and that require, for example, Monte Carlo simulation. The second approach consists of evaluating the importance of components, i.e., identifying the components that contribute most to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called "main effects" of each component, as well as to assess higher-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
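
    The "2n+1 points" in the highlights are the sigma points of the Unscented Transformation. A minimal sketch for independent component reliabilities (diagonal covariance); the scaling parameters and the three-component series system are illustrative assumptions, not the paper's case studies:

```python
import math

def unscented_moments(f, means, variances, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate mean/variance of independent inputs through f via 2n+1 sigma points."""
    n = len(means)
    lam = alpha ** 2 * (n + kappa) - n
    # sigma points: the mean, plus +/- offsets along each axis
    pts = [list(means)]
    for i in range(n):
        d = math.sqrt((n + lam) * variances[i])
        p_plus, p_minus = list(means), list(means)
        p_plus[i] += d
        p_minus[i] -= d
        pts += [p_plus, p_minus]
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * (2 * n)
    wc = [lam / (n + lam) + 1 - alpha ** 2 + beta] + [1.0 / (2 * (n + lam))] * (2 * n)
    ys = [f(p) for p in pts]
    mean = sum(w * y for w, y in zip(wm, ys))
    var = sum(w * (y - mean) ** 2 for w, y in zip(wc, ys))
    return mean, var

# illustrative series system of three components: R = r1 * r2 * r3
def system_reliability(r):
    return r[0] * r[1] * r[2]

m, v = unscented_moments(system_reliability, [0.90, 0.95, 0.99], [1e-4, 1e-4, 1e-4])
```

For a linear function the transformation is exact, which gives a quick sanity check on the weights.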

  2. Specialization Patterns

    OpenAIRE

    Schultz , Ulrik Pagh; Lawall , Julia ,; Consel , Charles

    1999-01-01

    Design patterns offer numerous advantages for software development, but can introduce inefficiency into the finished program. Program specialization can eliminate such overheads, but is most effective when targeted by the user to specific bottlenecks. Consequently, we propose to consider program specialization and design patterns as complementary concepts. On the one hand, program specialization can optimize object-oriented programs written using design patterns. On the other hand, design pat...

  3. Analysis of information security reliability: A tutorial

    International Nuclear Information System (INIS)

    Kondakci, Suleyman

    2015-01-01

    This article presents a concise reliability analysis of network security abstracted from stochastic modeling, reliability, and queuing theories. Network security analysis is composed of threats, their impacts, and recovery of the failed systems. A unique framework with a collection of the key reliability models is presented here to guide the determination of system reliability based on the strength of malicious acts and the performance of the recovery processes. A unique model, called the Attack-obstacle model, is also proposed here for analyzing systems with immunity growth features. Most computer science curricula do not contain courses in reliability modeling applicable to different areas of computer engineering. Hence, the topic of reliability analysis is often too diffuse for most computer engineers and researchers dealing with network security. This work is thus aimed at shedding some light on this issue, which can be useful in identifying models, their assumptions, and practical parameters for estimating the reliability of threatened systems and for assessing the performance of recovery facilities. It can also be useful for the classification of processes and states regarding the reliability of information systems. Systems with stochastic behaviors undergoing queue operations and random state transitions can also benefit from the approaches presented here. - Highlights: • A concise survey and tutorial in model-based reliability analysis applicable to information security. • A framework of key modeling approaches for assessing reliability of networked systems. • The framework facilitates quantitative risk assessment tasks guided by stochastic modeling and queuing theory. • Evaluation of approaches and models for modeling threats, failures, impacts, and recovery analysis of information systems

  4. Column Grid Array Rework for High Reliability

    Science.gov (United States)

    Mehta, Atul C.; Bodie, Charles C.

    2008-01-01

    Due to requirements for reduced size and weight, the use of grid array packages in space applications has become commonplace. To meet the requirements of high reliability and a high number of I/Os, ceramic column grid array (CCGA) packages were selected for major electronic components used in the next MARS Rover mission (specifically, high-density Field Programmable Gate Arrays). The probability of removal and replacement of these devices on the actual flight printed wiring board assemblies is deemed to be very high because of last-minute discoveries in final test which will dictate changes in the firmware. The questions and challenges presented to the manufacturing organizations engaged in the production of high-reliability electronic assemblies are: Is the reliability of the PWBA adversely affected by rework (removal and replacement) of the CGA package? And how many times can the same board be reworked without destroying a pad or degrading the lifetime of the assembly? To answer these questions, the most complex printed wiring board assembly used by the project was chosen as the test vehicle; the PWB was modified to provide a daisy-chain pattern, and a number of bare PWBs were acquired to this modified design. Non-functional 624-pin CGA packages with internal daisy chains matching the pattern on the PWB were procured. The combination of the modified PWB and the daisy-chained packages enables continuity measurements of every soldered contact during subsequent testing and thermal cycling. Several test vehicle boards were assembled, reworked and then thermally cycled to assess the reliability of the solder joints and board material, including pads and traces near the CGA. The details of the rework process and the results of thermal cycling are presented in this paper.

  5. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markov chain. We give quantitative relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. When these conditions are satisfied, the quantity of data transported over a wireless network node will not exceed the node capacity, so that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, and may find application in the fields of network design and topology control.

  6. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As a software system grows large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of stochastic differential equations, performs comparatively better than the existing NHPP-based models.
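
    The continuous-state fault detection process can be illustrated with an Euler-Maruyama simulation of a simple Itô SDE of the form dN = b(a - N)dt + sigma(a - N)dW. This generic form and its parameter values are assumptions for illustration, not the paper's exact generalized Erlang model:

```python
import math
import random

def simulate_srgm_sde(a=100.0, b=0.1, sigma=0.05, t_end=50.0, dt=0.01, seed=42):
    """Euler-Maruyama simulation of dN = b*(a - N)*dt + sigma*(a - N)*dW,
    where N(t) is the cumulative number of faults detected and a is the
    initial fault content."""
    random.seed(seed)
    t, n = 0.0, 0.0
    path = [(t, n)]
    while t < t_end:
        dw = random.gauss(0.0, math.sqrt(dt))      # Brownian increment
        n += b * (a - n) * dt + sigma * (a - n) * dw
        n = min(max(n, 0.0), a)                    # keep N within [0, a]
        t += dt
        path.append((t, n))
    return path

path = simulate_srgm_sde()   # N(t) drifts toward the fault content a
```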

  7. Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth

    2016-01-01

    If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
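
    The TMR principle itself is simple to state: each output bit follows the majority of the three redundant copies, so a single upset is masked. A minimal bitwise majority voter for illustration; verifying correct insertion throughout a real design, which is what the paper addresses, is the hard part:

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority vote over three redundant copies of a value.
    Each output bit takes the value held by at least two of the inputs."""
    return (a & b) | (a & c) | (b & c)

# a single-bit upset in one copy is masked by the other two
corrupted = 0b1011 ^ 0b1000          # copy with one flipped bit
assert tmr_vote(0b1011, 0b1011, corrupted) == 0b1011
```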

  8. An approach for assessing ALWR passive safety system reliability

    International Nuclear Information System (INIS)

    Hake, T.M.

    1991-01-01

    Many of the advanced light water reactor (ALWR) concepts proposed for the next generation of nuclear power plants rely on passive rather than active systems to perform safety functions. Despite the reduced redundancy of the passive systems as compared to active systems in current plants, the assertion is that the overall safety of the plant is enhanced due to the much higher expected reliability of the passive systems. In order to investigate this assertion, a study is being conducted at Sandia National Laboratories to evaluate the reliability of ALWR passive safety features in the context of probabilistic risk assessment (PRA). The purpose of this paper is to provide a brief overview of the approach to this study. The quantification of passive system reliability is not as straightforward as for active systems, due to the lack of operating experience and to the greater uncertainty in the governing physical phenomena. Thus, the adequacy of current methods for evaluating system reliability must be assessed, and alternatives proposed if necessary. For this study, the Westinghouse Advanced Passive 600 MWe reactor (AP600) was chosen as the advanced reactor for analysis, because of the availability of AP600 design information. The study compares the reliability of the AP600 emergency cooling system with that of corresponding systems in a current-generation reactor

  9. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    International Nuclear Information System (INIS)

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin; Wang, Shuang

    2015-01-01

    To reduce the heavy computational burden of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method, AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented, so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth-contact fatigue reliability analysis.

  10. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)

    2015-08-15

    To reduce the heavy computational burden of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method, AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion for active learning is presented, so that the number of iterations decreases dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is demonstrated on two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and that the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth-contact fatigue reliability analysis.

  11. Reliability-oriented multi-resource allocation in a stochastic-flow network

    International Nuclear Information System (INIS)

    Hsieh, C.-C.; Lin, M.-H.

    2003-01-01

    A stochastic-flow network consists of a set of nodes, including source nodes which supply various resources and sink nodes at which resource demands take place, and a collection of arcs whose capacities have multiple operational states. The network reliability of such a stochastic-flow network is the probability that resources can be successfully transmitted from source nodes through multi-capacitated arcs to sink nodes. Although evaluation schemes for network reliability in stochastic-flow networks have been extensively studied in the literature, how to allocate various resources at source nodes in a reliable manner remains unanswered. In this study, a resource allocation problem in a stochastic-flow network is formulated that aims to determine the optimal resource allocation policy at source nodes, subject to given resource demands at sink nodes, such that the network reliability of the stochastic-flow network is maximized, and an algorithm for computing the optimal resource allocation is proposed that incorporates the principle of minimal path vectors. A numerical example is given to illustrate the proposed algorithm.
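
    The network reliability notion used here can be sketched by brute-force enumeration of arc capacity states. The two-parallel-arc example and its state probabilities are made up for illustration; practical algorithms use minimal path vectors precisely to avoid this exhaustive enumeration:

```python
from itertools import product

def network_reliability(arc_states, demand):
    """Probability that total source-to-sink capacity meets the demand,
    by enumerating all combinations of independent arc capacity states.
    For simplicity the arcs are assumed to run in parallel, so their
    capacities add. Each arc is a list of (capacity, probability) states."""
    prob = 0.0
    for combo in product(*arc_states):
        p, cap = 1.0, 0
        for capacity, p_state in combo:
            p *= p_state
            cap += capacity
        if cap >= demand:
            prob += p
    return prob

# hypothetical arc with three capacity states: 0, 1 or 2 units
arc = [(0, 0.1), (1, 0.3), (2, 0.6)]
r = network_reliability([arc, arc], demand=2)   # two such arcs in parallel
```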

  12. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high and low-voltage side of a half-bridge IGBT separately in every fundamental cycle; the voltage is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.

  13. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to retain their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage: unlike operational reliability, possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including products unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. For storage reliability prediction, which is the main concern in this field, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods, examining the rationality of storage reliability assessment and prediction, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
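
    The parametric step can be sketched as fitting R(t) = R0 * exp(-lambda * t), where R0 < 1 captures possible initial failures, to nonparametric reliability estimates. The log-linear least-squares fit below is a simple stand-in for the paper's E-Bayesian machinery, and the data points are made up for illustration:

```python
import math

def fit_storage_reliability(times, r_hat):
    """Fit R(t) = R0 * exp(-lam * t) to nonparametric reliability estimates
    r_hat observed at the given times, by least squares on log R(t).
    Returns (R0, lam): the initial reliability and the storage failure rate."""
    ys = [math.log(r) for r in r_hat]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    slope = sxy / sxx            # slope of log R(t) is -lam
    r0 = math.exp(my - slope * mx)
    return r0, -slope

# illustrative estimates at storage times 0..3 (years)
r0, lam = fit_storage_reliability([0.0, 1.0, 2.0, 3.0], [0.95, 0.90, 0.86, 0.81])
```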

  14. A hybrid firefly algorithm and pattern search technique for SSSC based power oscillation damping controller design

    Directory of Open Access Journals (Sweden)

    Srikanta Mahapatra

    2014-12-01

    Full Text Available In this paper, a novel hybrid Firefly Algorithm and Pattern Search (h-FAPS) technique is proposed for a Static Synchronous Series Compensator (SSSC)-based power oscillation damping controller design. The proposed h-FAPS technique takes advantage of the global search capability of FA and the local search facility of PS. To tackle the drawback of using a remote signal that may affect the reliability of the controller, a modified signal equivalent to the remote speed deviation signal is constructed from local measurements. The performance of the proposed controllers is evaluated in SMIB and multi-machine power systems subjected to various transient disturbances. To show the effectiveness and robustness of the proposed design approach, simulation results are presented and compared with some recently published approaches such as Differential Evolution (DE) and Particle Swarm Optimization (PSO). It is observed that the proposed approach yields superior damping performance compared to some recently reported approaches.
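
    The local-search half of h-FAPS can be sketched with a basic Hooke-Jeeves-style pattern search: poll +/- step along each axis, keep any improvement, and shrink the step otherwise. The quadratic test function is illustrative, not the controller tuning objective:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Minimize f by coordinate polling: try +/- step along each axis,
    accept improvements, and halve the step when no move helps."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= shrink
    return x, fx

# illustrative objective with minimum at (1, -2)
best, val = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

In the hybrid scheme, the firefly phase would supply the starting point `x0`, and this polling phase refines it.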

  15. Cost benefit justification of nuclear plant reliability improvement

    International Nuclear Information System (INIS)

    El-Sayed, M.A.H.; Abdelmonem, N.M.

    1985-01-01

    The design of the secondary steam loop of a nuclear power plant has a significant effect on the reliability of the plant. Moreover, the necessity of cooling a reactor safely has increased the reliability demanded from the system. The rapidly rising construction costs and fuel prices in recent years have stimulated a great deal of interest in optimizing the productivity of a nuclear power plant through reliability improvement of the secondary steam loop and the reactor cooling system. A method for evaluating the reliability of the steam loop and cooling system of a nuclear power plant is presented. The method utilizes the cut-set technique and can easily show to what extent the overall reliability of the nuclear plant is affected by possible failures in the steam and cooling subsystems. A model for calculating the increase in nuclear plant productivity resulting from a proposed improvement in the reliability of the two subsystems is discussed. The model takes into account the capital cost of spare parts for several components, replacement energy, and operating and maintenance costs
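
    The cut-set technique computes system unreliability from minimal cut sets: the system fails when every component of some cut set fails, and inclusion-exclusion over the cut-set unions gives the exact probability. A minimal sketch; the two cut sets, component names, and failure probabilities are hypothetical, not taken from the paper:

```python
from itertools import combinations

def system_unreliability(cut_sets, q):
    """Exact system unreliability from minimal cut sets via inclusion-exclusion,
    assuming independent component failure probabilities q[component]."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for group in combinations(cut_sets, k):
            comps = set().union(*group)       # components in the union of k cut sets
            term = 1.0
            for c in comps:
                term *= q[c]
            total += (-1) ** (k + 1) * term
    return total

# hypothetical steam-loop example: the system fails if the pump fails,
# or if both redundant valves fail
q = {"pump": 0.01, "valveA": 0.05, "valveB": 0.05}
u = system_unreliability([{"pump"}, {"valveA", "valveB"}], q)
```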

  16. Reliability in endoscopic diagnosis of portal hypertensive gastropathy

    Science.gov (United States)

    de Macedo, George Fred Soares; Ferreira, Fabio Gonçalves; Ribeiro, Maurício Alves; Szutan, Luiz Arnaldo; Assef, Mauricio Saab; Rossini, Lucio Giovanni Battista

    2013-01-01

    AIM: To analyze reliability among endoscopists in diagnosing portal hypertensive gastropathy (PHG) and to determine which criteria from the most utilized classifications are the most suitable. METHODS: From January to July 2009, in an academic quaternary referral center at Santa Casa of São Paulo Endoscopy Service, Brazil, we performed this single-center prospective study. In this period, we included 100 patients, including 50 sequential patients who had portal hypertension of various etiologies; who were previously diagnosed based on clinical, laboratory and imaging exams; and who presented with esophageal varices. In addition, our study included 50 sequential patients who had dyspeptic symptoms and were referred for upper digestive endoscopy without portal hypertension. All subjects underwent upper digestive endoscopy, and the images of the exam were digitally recorded. Five endoscopists with more than 15 years of experience answered an electronic questionnaire, which included endoscopic criteria from the 3 most commonly used Portal Hypertensive Gastropathy classifications (McCormack, NIEC and Baveno) and the presence of elevated or flat antral erosive gastritis. All five endoscopists were blinded to the patients’ clinical information, and all images of varices were deliberately excluded for the analysis. RESULTS: The three most common etiologies of portal hypertension were schistosomiasis (36%), alcoholic cirrhosis (20%) and viral cirrhosis (14%). Of the 50 patients with portal hypertension, 84% were Child A, 12% were Child B, 4% were Child C, 64% exhibited previous variceal bleeding and 66% were previously endoscopic treated. The endoscopic parameters, presence or absence of mosaic-like pattern, red point lesions and cherry-red spots were associated with high inter-observer reliability and high specificity for diagnosing Portal Hypertensive Gastropathy. Sensitivity, specificity and reliability for the diagnosis of PHG (%) were as follows: mosaic-like pattern

  17. Analysis of NPP protection structure reliability under impact of a falling aircraft

    International Nuclear Information System (INIS)

    Shul'man, G.S.

    1996-01-01

    A methodology for evaluating the reliability of NPP protection structures under the impact of a falling aircraft is considered. The methodology is based on probabilistic analysis of all potential events. The problem is solved in three stages: determination of the loads on structural units, calculation of the local reliability of protection structures under the assigned loads, and estimation of the overall structure reliability. The proposed methodology may be applied at the NPP design stage and for determining the reliability of existing structures

  18. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    Science.gov (United States)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. DFM scores are then computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to two 20 nm products with a chip area of 11 mm² on the metal 2 layer. All the hotspots were resolved, and the number of DFM spacing violations decreased by 7-15%.

  19. Reliability Evaluation of Distribution System Considering Sequential Characteristics of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Sheng Wanxing

    2016-01-01

    Full Text Available To address the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system are sampled by SMCS in chronological order, and the corresponding output power of DG is generated. The proposed method has been tested on feeder F4 of IEEE-RBTS Bus 6. The results show that reliability evaluation of a distribution system considering the uncertainty of DG output power can be effectively implemented by SMCS.
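
    The core of sequential Monte Carlo simulation is sampling component up/down histories in chronological order. A minimal single-component sketch with exponential dwell times (the rates are illustrative, not from the IEEE-RBTS test system); the long-run up-time fraction approaches the analytic availability mu / (lambda + mu):

```python
import random

def sequential_mc_availability(failure_rate, repair_rate, t_end=10000.0, seed=1):
    """Sample one component's up/down history in chronological order, with
    exponentially distributed dwell times, and return its up-time fraction."""
    random.seed(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < t_end:
        rate = failure_rate if state_up else repair_rate
        dwell = min(random.expovariate(rate), t_end - t)  # time in current state
        if state_up:
            up_time += dwell
        t += dwell
        state_up = not state_up
    return up_time / t_end

# failure rate 0.01/h, repair rate 0.1/h -> analytic availability ~0.909
a = sequential_mc_availability(0.01, 0.1)
```

A full distribution-system study would sample every component and DG unit this way and overlay the chronological DG output series on the sampled states.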

  20. Dynamic reliability of digital-based transmitters

    Energy Technology Data Exchange (ETDEWEB)

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France) and Universite de Technologie de Troyes - UTT, Institut Charles Delaunay - ICD and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France); Smidts, Carol [Ohio State University (OSU), Nuclear Engineering Program, Department of Mechanical Engineering, Scott Laboratory, 201 W 19th Ave, Columbus OH 43210 (United States); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France)

    2011-07-15

    Dynamic reliability explicitly handles the interactions between the stochastic behaviour of system components and the deterministic behaviour of process variables. While dynamic reliability provides a more efficient and realistic way to perform probabilistic risk assessment than 'static' approaches, its industrial-level applications are still limited. Factors contributing to this situation are the inherent complexity of the theory and the lack of a generic platform. More recently, the increased use of digital-based systems has also introduced additional modelling challenges related to specific interactions between system components. Typical examples are 'intelligent transmitters', which are able to exchange information and to perform internal data processing and advanced functionalities. To make a contribution to solving these challenges, the mathematical framework of dynamic reliability is extended to handle the data and information which are processed and exchanged between system components. Stochastic deviations that may affect system properties are also introduced to enhance the modelling of failures. A formalized Petri net approach is then presented to perform the corresponding reliability analyses using numerical methods. Following this formalism, a versatile model for the dynamic reliability modelling of digital-based transmitters is proposed. Finally, the framework's flexibility and effectiveness are demonstrated on a substantial case study involving a simplified model of a nuclear fast reactor.

  1. Binge Eating Disorder: Reliability and Validity of a New Diagnostic Category.

    Science.gov (United States)

    Brody, Michelle L.; And Others

    1994-01-01

    Examined reliability and validity of binge eating disorder (BED), proposed for inclusion in Diagnostic and Statistical Manual of Mental Disorders (DSM), fourth edition. Interrater reliability of BED diagnosis compared favorably with that of most diagnoses in DSM revised third edition. Study comparing obese individuals with and without BED and…

  2. MOV reliability evaluation and periodic verification scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long term reliability of gate or globe motor operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOVs design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature which impacts safety related MOVs.

  3. MOV reliability evaluation and periodic verification scheduling

    International Nuclear Information System (INIS)

    Bunte, B.D.

    1996-01-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long term reliability of gate or globe motor operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOVs design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature which impacts safety related MOVs

  4. French power system reliability report 2008

    International Nuclear Information System (INIS)

    Tesseron, J.M.

    2009-06-01

    The reliability of the French power system was fully under control in 2008, despite the power outage in the eastern part of the Provence-Alpes-Cote d'Azur region on November 3, which had been dreaded for several years, since it had not been possible to set up a structurally adequate network. Pursuant to a consultation meeting, the reinforcement solution proposed by RTE was approved by the Minister of Energy, boding well for greater reliability in future. Based on the observations presented in this 2008 Report, RTE's Power System Reliability Audit Mission considers that no new recommendations are needed beyond those expressed in previous reliability reports and during reliability audits. The publication of this yearly report is in keeping with RTE's goal to promote the follow-up over time of the evolution of reliability in its various aspects. RTE thus aims to contribute to the development of reliability culture, by encouraging an improved assessment by the different players (both RTE and network users) of the role they play in building reliability, and by advocating the taking into account of reliability and benchmarking in the European organisations of Transmission System Operators. Contents: 1 - Brief overview of the evolution of the internal and external environment; 2 - Operating situations encountered: climatic conditions, supply / demand balance management, operation of interconnections, management of internal congestion, contingencies affecting the transmission facilities; 3 - Evolution of the reliability reference guide: external reference guide: directives, laws, decrees, etc, ETSO, UCTE, ENTSO-E, contracting contributing to reliability, RTE internal reference guide; 4 - Evolution of measures contributing to reliability in the equipment field: intrinsic performances of components (generating sets, protection systems, operation PLC's, instrumentation and control, automatic frequency and voltage controls, transmission facilities, control systems, load

  5. Reliability Improved Design for a Safety System Channel

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Eung Se; Kim, Yun Goo [KHNP, Daejeon (Korea, Republic of)]

    2016-05-15

    Nowadays, these systems are implemented on the same platform type, such as a qualified programmable logic controller (PLC). The platform makes intensive use of digital communication over fiber-optic links to reduce cabling costs and to achieve effective signal isolation. These communication interfaces and redundancies within a channel increase the complexity of the overall system design. This paper proposes a simpler channel architecture design to reduce this complexity and to enhance overall channel reliability. A simplified safety channel configuration is proposed, and its failure probabilities are compared with those of the baseline safety channel configuration using estimated generic values. The simplified channel configuration achieves a 40 percent reduction in failure probability compared to the baseline safety channel configuration. If this configuration can be implemented within a processor module, overall safety channel reliability will increase and the costs of fabrication and maintenance will be greatly reduced.

  6. Reliability Improved Design for a Safety System Channel

    International Nuclear Information System (INIS)

    Oh, Eung Se; Kim, Yun Goo

    2016-01-01

    Nowadays, these systems are implemented on the same platform type, such as a qualified programmable logic controller (PLC). The platform makes intensive use of digital communication over fiber-optic links to reduce cabling costs and to achieve effective signal isolation. These communication interfaces and redundancies within a channel increase the complexity of the overall system design. This paper proposes a simpler channel architecture design to reduce this complexity and to enhance overall channel reliability. A simplified safety channel configuration is proposed, and its failure probabilities are compared with those of the baseline safety channel configuration using estimated generic values. The simplified channel configuration achieves a 40 percent reduction in failure probability compared to the baseline safety channel configuration. If this configuration can be implemented within a processor module, overall safety channel reliability will increase and the costs of fabrication and maintenance will be greatly reduced.

  7. Conformal prediction for reliable machine learning theory, adaptations and applications

    CERN Document Server

    Balasubramanian, Vineeth; Vovk, Vladimir

    2014-01-01

    The conformal predictions framework is a recent development in machine learning that can associate a reliable measure of confidence with a prediction in any real-world pattern recognition application, including risk-sensitive applications such as medical diagnosis, face recognition, and financial risk prediction. Conformal Predictions for Reliable Machine Learning: Theory, Adaptations and Applications captures the basic theory of the framework, demonstrates how to apply it to real-world problems, and presents several adaptations, including active learning, change detection, and anomaly detection.
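    The confidence measure at the core of this framework can be illustrated with a minimal sketch: a conformal p-value for a test example is the smoothed fraction of calibration nonconformity scores at least as large as the test example's score. The data and the distance-from-the-mean nonconformity measure below are illustrative assumptions, not taken from the book.

```python
import statistics

def conformal_p_value(calibration_scores, test_score):
    """p-value = fraction of calibration nonconformity scores >= test score,
    with the +1 smoothing used in conformal prediction."""
    ge = sum(1 for s in calibration_scores if s >= test_score)
    return (ge + 1) / (len(calibration_scores) + 1)

# Nonconformity here = absolute distance from the calibration mean.
calibration = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]
mean = statistics.mean(calibration)
scores = [abs(x - mean) for x in calibration]

p_typical = conformal_p_value(scores, abs(5.05 - mean))  # conforming example
p_outlier = conformal_p_value(scores, abs(9.00 - mean))  # anomalous example
```

A prediction region at confidence 1 - epsilon keeps every candidate label whose p-value exceeds epsilon; the outlier's low p-value is what flags it in the change-detection and anomaly-detection adaptations.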

  8. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    Science.gov (United States)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measurement based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The results obtained are compared with existing methods and the effectiveness is validated. Issues that are often vaguely understood in manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.

  9. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    Science.gov (United States)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.

  10. Reliability in Plantinga´s Account of Epistemic Warrant

    Directory of Open Access Journals (Sweden)

    John C. Wingard, Jr.

    2002-12-01

    Full Text Available In this paper I consider the reliability condition in Alvin Plantinga's proper functionalist account of epistemic warrant. I begin by reviewing in some detail the features of the reliability condition as Plantinga has articulated it. From there, I consider what is needed to ground or secure the sort of reliability which Plantinga has in mind, and argue that what is needed is a significant causal condition which has generally been overlooked. Then, after identifying eight versions of the relevant sort of reliability, I examine each alternative as to whether its requirement, along with Plantinga's other proposed conditions, would give us a satisfactory account of epistemic warrant. I conclude that there is little to no hope of formulating a reliability condition that would yield a satisfactory analysis of the sort Plantinga desires.

  11. Uncertainties and reliability theories for reactor safety

    International Nuclear Information System (INIS)

    Veneziano, D.

    1975-01-01

    What makes the safety problem of nuclear reactors particularly challenging is the demand for high levels of reliability and the limitation of statistical information. The latter is an unfortunate circumstance, which forces deductive theories of reliability to use models and parameter values with weak factual support. The uncertainty about probabilistic models and parameters which are inferred from limited statistical evidence can be quantified and incorporated rationally into inductive theories of reliability. In such theories, the starting point is the information actually available, as opposed to an estimated probabilistic model. But, while the necessity of introducing inductive uncertainty into reliability theories has been recognized by many authors, no satisfactory inductive theory is presently available. The paper presents: a classification of uncertainties and of reliability models for reactor safety; a general methodology to include these uncertainties into reliability analysis; a discussion about the relative advantages and the limitations of various reliability theories (specifically, of inductive and deductive, parametric and nonparametric, second-moment and full-distribution theories). For example, it is shown that second-moment theories, which were originally suggested to cope with the scarcity of data, and which have been proposed recently for the safety analysis of secondary containment vessels, are the least capable of incorporating statistical uncertainty. The focus is on reliability models for external threats (seismic accelerations and tornadoes). As an application example, the effect of statistical uncertainty on seismic risk is studied using parametric full-distribution models
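    The second-moment theories discussed in this record can be made concrete with a short sketch: given only the means and standard deviations of an independent, normally distributed resistance R and load S, the Cornell reliability index and the corresponding failure probability follow directly. The numbers are hypothetical.

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell (second-moment) index for the margin M = R - S,
    assuming independent normal resistance R and load S."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """P(M < 0) = Phi(-beta) under the normal assumption."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

beta = reliability_index(mu_r=60.0, sigma_r=6.0, mu_s=40.0, sigma_s=8.0)
pf = failure_probability(beta)  # beta = 2.0, pf about 2.3e-2
```

The sketch also shows the limitation the paper raises: the index uses only first and second moments, so statistical uncertainty about the moments themselves has nowhere to enter.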

  12. An integrated reliability-based design optimization of offshore towers

    International Nuclear Information System (INIS)

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    Once the uncertainty in parameters such as material, loading and geometry is recognized, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept becomes more meaningful for achieving an economical design, as it combines a reliability analysis with an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis, both for optimization and for reliability. The efficiency of the RBDO system depends on these numerical algorithms. In this work, an integrated algorithm system is proposed to implement the RBDO of offshore towers subjected to extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration on an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.

  13. An integrated reliability-based design optimization of offshore towers

    Energy Technology Data Exchange (ETDEWEB)

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey)]; Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)]

    2009-10-15

    Once the uncertainty in parameters such as material, loading and geometry is recognized, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept becomes more meaningful for achieving an economical design, as it combines a reliability analysis with an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis, both for optimization and for reliability. The efficiency of the RBDO system depends on these numerical algorithms. In this work, an integrated algorithm system is proposed to implement the RBDO of offshore towers subjected to extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration on an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.

  14. Fast Monte Carlo reliability evaluation using support vector machine

    International Nuclear Information System (INIS)

    Rocco, Claudio M.; Moreno, Jose Ali

    2002-01-01

    This paper deals with the feasibility of using a support vector machine (SVM) to build empirical models for use in reliability evaluation. The approach takes advantage of the speed of the SVM in the numerous model calculations typically required to perform a Monte Carlo reliability evaluation. The main idea is to develop an estimation algorithm by training a model on a restricted data set, and to replace the system performance evaluation by a simpler calculation which provides reasonably accurate model outputs. The proposed approach is illustrated by several examples. Excellent system reliability results are obtained by training an SVM with a small amount of information.
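    The surrogate idea can be sketched as follows: train a cheap classifier on a small set of exact limit-state evaluations, then run the Monte Carlo loop against the surrogate only. Everything below is an illustrative assumption; in particular, a hand-rolled perceptron stands in for the SVM so the sketch needs only the standard library.

```python
import random

random.seed(1)

def limit_state(x, y):
    """'Expensive' performance function: the system fails when x + y >= 3."""
    return x + y - 3.0

# Small training set of exact evaluations (the restricted data set).
train = [(random.gauss(1, 1), random.gauss(1, 1)) for _ in range(200)]
labels = [1 if limit_state(x, y) >= 0 else -1 for x, y in train]

# Perceptron stand-in for the SVM: fit a linear decision score.
w0 = w1 = b = 0.0
for _ in range(50):
    for (x, y), t in zip(train, labels):
        if t * (w0 * x + w1 * y + b) <= 0:
            w0 += t * x; w1 += t * y; b += t

def surrogate_fails(x, y):
    return w0 * x + w1 * y + b >= 0

# The Monte Carlo loop now calls only the cheap surrogate.
N = 20000
failures = sum(surrogate_fails(random.gauss(1, 1), random.gauss(1, 1))
               for _ in range(N))
pf = failures / N            # analytic value is about 0.24 for this toy case
reliability = 1.0 - pf
```

The saving is that the expensive model is evaluated 200 times instead of 20,000; the surrogate absorbs the rest.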

  15. Reliability-Aware Cooperative Node Sleeping and Clustering in Duty-Cycled Sensors Networks

    Directory of Open Access Journals (Sweden)

    Jeungeun Song

    2018-01-01

    Full Text Available Duty-cycled sensor networks provide a new perspective for improving energy efficiency and assuring reliability in multi-hop cooperative sensor networks. In this paper, we consider the energy-efficient cooperative node sleeping and clustering problems in cooperative sensor networks where clusters of relay nodes jointly transmit sensory data to the next hop. Our key idea for guaranteeing reliability is to exploit the on-demand number of cooperative nodes, facilitating the prediction of personalized end-to-end (ETE) reliability. Namely, a novel reliability-aware cooperative routing (RCR) scheme is proposed to select k cooperative nodes at every hop (RCR-selection). After selecting k cooperative nodes at every hop, all of the non-cooperative nodes go into sleep status. In order to solve the cooperative node clustering problem, we propose the RCR-based optimal relay assignment and cooperative data delivery (RCR-delivery) scheme to provide low-communication-overhead data transmission and an optimal duty cycle for a given number of cooperative nodes when the network is dynamic, which enables some of the cooperative nodes to switch to idle status for further energy saving. Through extensive OPNET-based simulations, we show that the proposed scheme significantly outperforms existing geographic and beaconless geographic routing schemes in wireless sensor networks with a highly dynamic wireless channel, and controls energy consumption while ETE reliability is effectively guaranteed.
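    The hop-level selection of k cooperative nodes against an end-to-end reliability target can be sketched as below, under simplifying assumptions that are not from the paper: relay links are independent and share one per-link success probability.

```python
def hop_reliability(p, k):
    """A hop succeeds if at least one of its k cooperative relays
    delivers the packet (independent links, success probability p each)."""
    return 1.0 - (1.0 - p) ** k

def nodes_needed(p, hops, target):
    """Smallest k per hop giving end-to-end reliability >= target."""
    k = 1
    while hop_reliability(p, k) ** hops < target:
        k += 1
    return k

k = nodes_needed(p=0.8, hops=5, target=0.99)   # 4 cooperative nodes per hop
ete = hop_reliability(0.8, k) ** 5             # about 0.992
```

Any relay beyond this k can sleep, which is the energy-saving half of the trade-off the record describes.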

  16. Reliability Estimation for Digital Instrument/Control System

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yaguang; Sydnor, Russell [U.S. Nuclear Regulatory Commission, Washington, D.C. (United States)]

    2011-08-15

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique, and the reliability estimation method for a software system differs from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system that combines a recently developed software reliability estimation method with a traditional hardware reliability estimation method.
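    The combination of the two estimation methods can be sketched under the common simplifying assumption that the parts form a series system, so the overall system works only if every part works. The per-part reliabilities below are hypothetical placeholders, not values from the paper.

```python
def series_reliability(parts):
    """Reliability of independent components in series: the product."""
    r = 1.0
    for x in parts:
        r *= x
    return r

# Hardware parts estimated with traditional methods; the software part
# estimated separately with a software reliability estimation method.
hardware = [0.999, 0.998, 0.9995]  # e.g. sensor, A/D converter, actuator
software = 0.997                   # control and protection software
system = series_reliability(hardware + [software])  # about 0.9935
```

The sketch makes the paper's point visible: the software term enters the product on equal footing, so a weak software estimate dominates the system figure.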

  17. Reliability Estimation for Digital Instrument/Control System

    International Nuclear Information System (INIS)

    Yang, Yaguang; Sydnor, Russell

    2011-01-01

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique, and the reliability estimation method for a software system differs from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system that combines a recently developed software reliability estimation method with a traditional hardware reliability estimation method.

  18. Reliability and risk treatment centered maintenance

    International Nuclear Information System (INIS)

    Pexa, Martin; Hladik, Tomas; Ales, Zdenek; Legat, Vaclav; Muller, Miroslav; Valasek, Petr; Havlu, Vit

    2014-01-01

    We propose a new methodology for applying the well-known tools RCM, RBI and SIFpro with the aim of treating risks by means of suitable maintenance. The basis of the new methodology is the application of all three methods together, rather than separately as is typical today. The proposed methodology suggests having just one managing team for reliability and risk treatment centered maintenance (RRTCM), employing the existing RCM, RBI and SIFpro tools concurrently. This approach allows a significant reduction in the duration of engineering activities. In the proposed methodology these activities are staged into five phases and structured to eliminate all duplication resulting from separate application of the three tools. The newly proposed methodology saves 45% to 50% of the engineering workload and yields correspondingly significant financial savings.

  19. Reliability and risk treatment centered maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Pexa, Martin; Hladik, Tomas; Ales, Zdenek; Legat, Vaclav; Muller, Miroslav; Valasek, Petr [Czech University of Life Sciences Prague, Kamycka (Czech Republic)]; Havlu, Vit [Unipetrol A. S, Prague (Czech Republic)]

    2014-10-15

    We propose a new methodology for applying the well-known tools RCM, RBI and SIFpro with the aim of treating risks by means of suitable maintenance. The basis of the new methodology is the application of all three methods together, rather than separately as is typical today. The proposed methodology suggests having just one managing team for reliability and risk treatment centered maintenance (RRTCM), employing the existing RCM, RBI and SIFpro tools concurrently. This approach allows a significant reduction in the duration of engineering activities. In the proposed methodology these activities are staged into five phases and structured to eliminate all duplication resulting from separate application of the three tools. The newly proposed methodology saves 45% to 50% of the engineering workload and yields correspondingly significant financial savings.

  20. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is usually used instead, because it follows a very clear train of thought and is simple to program. However, the traditional response surface method fits a response surface of quadratic polynomials whose accuracy cannot be guaranteed, because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, which can be used when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. In the proposed method, all the sample points for training the network come from the whole area, so the true limit state surface in the whole area can be fitted. Calculation examples and comparative analysis show that the proposed method is much better than the traditional response surface method of quadratic polynomials: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well in the whole area. The method proposed in this paper is therefore suitable for engineering application.

  1. Day-to-day reliability of gait characteristics in rats

    DEFF Research Database (Denmark)

    Raffalt, Peter Christian; Nielsen, Louise R; Madsen, Stefan

    2018-01-01

    day-to-day reliability of the gait pattern parameters observed in rats during treadmill walking. The results of the present study may serve as a reference material that can help future intervention studies on rat gait characteristics both with respect to the selection of outcome measures...

  2. QA support for TFTR reliability improvement program in preparation for DT operation

    International Nuclear Information System (INIS)

    Parsells, R.F.; Howard, H.P.

    1987-01-01

    As TFTR approaches experiments in the Q=1 regime, machine reliability becomes a major variable in achieving experimental objectives. This paper describes the methods used to quantify current reliability levels, the levels required for D-T operations, proposed methods for reliability growth and improvement, and the tracking of reliability performance during that growth. Included in this scope are data collection techniques and their shortcomings, the bounding of current reliability on the upper end, and the requirements for D-T operations. Problem characterization through Pareto diagrams provides insight into recurrent failure modes, and the use of Duane plots for charting reliability changes, both cumulative and instantaneous, is explained and demonstrated.
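    A Duane plot charts cumulative MTBF against cumulative operating time on log-log axes; reliability growth shows up as a straight line whose slope is the growth rate. A minimal sketch with invented failure times:

```python
import math

# Cumulative times (hours) at which successive failures occurred (invented).
failure_times = [45, 110, 240, 480, 900, 1600, 2700]

# Duane postulate: cumulative MTBF = theta_1 * t^alpha, so the points
# (log t, log cumulative MTBF) lie on a line with slope alpha.
xs = [math.log(t) for t in failure_times]
ys = [math.log(t / (i + 1)) for i, t in enumerate(failure_times)]  # MTBF = t / n(t)

# Least-squares slope of ys on xs gives the growth rate alpha.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
# alpha > 0 indicates reliability growth; the instantaneous MTBF is the
# cumulative MTBF divided by (1 - alpha).
```

With intervals between failures roughly doubling, the fitted slope lands near 0.5, a healthy growth rate for a test programme of this kind.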

  3. Reliability Oriented Circuit Design For Power Electronics Applications

    DEFF Research Database (Denmark)

    Sintamarean, Nicolae Cristian

    is presented. Chapter 3 presents the electro-thermal model validation and the reliability studies performed by the proposed tool. The chapter ends with a detailed lifetime analysis, which emphasizes the mission-profile variation and gate-driver parameters variation impact on the PV-inverter devices lifetime......Highly reliable components are required in order to minimize the downtime during the lifetime of the converter and implicitly the maintenance costs. Therefore, the design of high reliable converters under constrained reliability and cost is a great challenge to be overcome in the future....... Moreover, the impact of the mission-profile sampling time on the lifetime estimation accuracy is also determined. The second part of the thesis introduced in Chapter 4, presents a novel gate-driver concept which reduces the dependency of the device power losses variations on the device loading variations...

  4. Composite reliability evaluation for transmission network planning

    Directory of Open Access Journals (Sweden)

    Jiashen Teh

    2018-01-01

    Full Text Available As the penetration of wind power into the power system increases, the ability to assess the reliability impact of such interaction becomes more important. Composite reliability evaluations involving wind energy provide ample opportunities for assessing the benefits of different wind farm connection points. A connection to a weak area of the transmission network will require network reinforcement for absorbing the additional wind energy. Traditionally, the reinforcements are performed by constructing new transmission corridors. However, a new state-of-the-art technology such as the dynamic thermal rating (DTR) system provides a new reinforcement strategy, which in turn requires a new reliability assessment method. This paper demonstrates a methodology for assessing the cost and the reliability of network reinforcement strategies by considering DTR systems when large-scale wind farms are connected to the existing power network. Sequential Monte Carlo simulations were performed, and all DTRs and wind speeds were simulated using the auto-regressive moving average (ARMA) model. Various reinforcement strategies were assessed from their cost and reliability aspects, with practical industrial standards used as guidelines when assessing costs. The proposed methodology is thus able to determine the optimal reinforcement strategies when both cost and reliability requirements are considered.
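    The sequential simulation described here needs time-correlated hourly wind speeds. As a stand-in for the ARMA model, the sketch below uses an AR(1) process; the mean, persistence and noise level are illustrative assumptions, not values from the paper.

```python
import random

random.seed(42)

def simulate_wind(hours, mean=8.0, phi=0.9, sigma=1.0):
    """AR(1) wind-speed series: the deviation from the mean persists
    with coefficient phi and is driven by Gaussian noise."""
    speeds, dev = [], 0.0
    for _ in range(hours):
        dev = phi * dev + random.gauss(0.0, sigma)
        speeds.append(max(0.0, mean + dev))  # wind speed cannot be negative
    return speeds

wind = simulate_wind(8760)       # one simulated year at hourly resolution
avg = sum(wind) / len(wind)      # close to the 8.0 m/s mean
```

Each sequential Monte Carlo year draws a fresh series like this, so that line ratings and wind output share realistic hour-to-hour persistence.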

  5. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    Science.gov (United States)

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-10-01

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.

  6. Reliability evaluation of microgrid considering incentive-based demand response

    Science.gov (United States)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity usage behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer's comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR based on Monte Carlo simulation is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.

  7. Health monitoring of 90° bolted joints using fuzzy pattern recognition of ultrasonic signals

    International Nuclear Information System (INIS)

    Jalalpour, M; El-Osery, A I; Austin, E M; Reda Taha, M M

    2014-01-01

    Bolted joints are important parts for aerospace structures. However, there is a significant risk associated with assembling bolted joints due to potential human error during the assembly process. Such errors are expensive to find and correct if exposed during environmental testing, yet checking the integrity of individual fasteners after assembly would be a time consuming task. Recent advances in structural health monitoring (SHM) can provide techniques to not only automate this process but also make it reliable. This integrity monitoring requires damage features to be related to physical conditions representing the structural integrity of bolted joints. In this paper an SHM technique using ultrasonic signals and fuzzy pattern recognition to monitor the integrity of 90° bolted joints in aerospace structures is described. The proposed technique is based on normalized fast Fourier transform (NFFT) of transmitted signals and fuzzy pattern recognition. Moreover, experimental observations of a case study on an aluminum 90° bolted joint are presented. We demonstrate the ability of the proposed method to efficiently monitor and indicate bolted joint integrity. (paper)
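    The NFFT feature step can be sketched as a magnitude spectrum normalized to unit energy, so that signals of different strengths become comparable. The naive O(n^2) DFT and the single-tone test signal below are illustrative assumptions chosen to keep the sketch self-contained.

```python
import cmath
import math

def nfft_features(signal):
    """Magnitude spectrum of the signal, normalized to unit energy
    (naive O(n^2) DFT over the first n/2 bins)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    total = math.sqrt(sum(m * m for m in mags)) or 1.0
    return [m / total for m in mags]

# A clean transmitted tone concentrates its energy in one bin.
sig = [math.sin(2 * math.pi * 3 * t / 64) for t in range(64)]
feats = nfft_features(sig)
peak_bin = max(range(len(feats)), key=feats.__getitem__)  # bin 3
```

A loose or missing fastener changes how the ultrasonic energy spreads across bins, and it is that normalized shape that the fuzzy pattern recognition stage classifies.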

  8. Qualitative Analysis of the Impact of SOA Patterns on Quality Attributes

    NARCIS (Netherlands)

    Galster, Matthias; Avgeriou, Paris; Tang, A; Muccini, H

    2012-01-01

    Software architecture patterns are proven and reusable solutions to common architecture design problems. One characteristic of architecture patterns is that they affect quality attributes (e.g., performance, reliability). Over the past years, architecture patterns for service-based systems have been

  9. Reliability study: maintenance facilities Portsmouth Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Post, B.E.; Sikorski, P.A.; Fankell, R.; Johnson, O.; Ferryman, D.S.; Miller, R.L.; Gearhart, E.C.; Rafferty, M.J.

    1981-08-01

    A reliability study of the maintenance facilities at the Portsmouth Gaseous Diffusion Plant has been completed. The reliability study team analyzed test data and made visual inspections of each component contributing to the overall operation of the facilities. The impacts of facilities and equipment failures were given consideration with regard to personnel safety, protection of government property, health physics, and environmental control. This study revealed that the maintenance facilities are generally in good condition. After evaluating the physical condition and technology status of the major components, the study team made several basic recommendations. Implementation of the recommendations proposed in this report will help assure reliable maintenance of the plant through the year 2000

  10. Reliability-Based Robust Design Optimization of Structures Considering Uncertainty in Design Variables

    Directory of Open Access Journals (Sweden)

    Shujuan Wang

    2015-01-01

    This paper investigates structural design optimization covering both reliability and robustness under uncertainty in design variables. The main objective is to improve the efficiency of the optimization process. To address this problem, a hybrid reliability-based robust design optimization (RRDO) method is proposed. Prior to the design optimization, Sobol sensitivity analysis is used for selecting key design variables and providing response variance as well, resulting in significantly reduced computational complexity. A single-loop algorithm is employed to guarantee the structural reliability, allowing a fast optimization process. In the case of robust design, a weighting factor balances the response performance and variance with respect to the uncertainty in design variables. The main contribution of this paper is that the proposed method applies the RRDO strategy with the use of global approximation and Sobol sensitivity analysis, leading to reduced computational cost. A structural example is given to illustrate the performance of the proposed method.
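The Sobol screening step can be sketched with a crude pick-freeze estimator. The linear test model and Uniform(0, 1) inputs are assumptions for illustration, not the structural model of the paper:

```python
import random

def sobol_first_order(f, n_vars, n=50_000, seed=0):
    """Crude first-order Sobol indices via the pick-freeze estimator.

    Toy stand-in for the sensitivity screening step; inputs are assumed
    i.i.d. Uniform(0, 1) and f maps a list of floats to a float.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n)]
    y_a = [f(x) for x in A]
    mean = sum(y_a) / n
    var = sum((y - mean) ** 2 for y in y_a) / n
    indices = []
    for i in range(n_vars):
        # Keep coordinate i from A, redraw the remaining coordinates from B.
        y_mix = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * ym for ya, ym in zip(y_a, y_mix)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Linear test model y = x1 + 3*x2: analytic indices are S1 = 0.1, S2 = 0.9.
s = sobol_first_order(lambda x: x[0] + 3.0 * x[1], 2)
print([round(v, 2) for v in s])
```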

  11. Reliable Actuator for Cryo Propellant Fluid Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Cryogenic fluid handling applications require a reliable actuation technology that can handle very low temperatures. A novel EM hammer drive technology is proposed...

  12. Quantifiable and Reliable Structural Health Management Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Major concerns for implementing a practical built-in structural health monitoring system are prediction accuracy and data reliability. It is proposed to develop...

  13. Dynamic reliability assessment and prediction for repairable systems with interval-censored data

    International Nuclear Information System (INIS)

    Peng, Yizhen; Wang, Yu; Zi, YanYang; Tsui, Kwok-Leung; Zhang, Chuhua

    2017-01-01

    The ‘Test, Analyze and Fix’ process is widely applied to improve the reliability of a repairable system. In this process, dynamic reliability assessment for the system has received a great deal of attention. Due to instrument malfunctions, staff omissions and imperfect inspection strategies, field reliability data are often subject to interval censoring, making dynamic reliability assessment a difficult task. Most traditional methods treat this kind of data as multiple normally distributed variables or assume the missing mechanism is missing at random, which may cause a large bias in parameter estimation. This paper proposes a novel method to evaluate and predict the dynamic reliability of a repairable system subject to interval censoring. First, a multiple imputation strategy based on the assumption that the reliability growth trend follows a nonhomogeneous Poisson process is developed to derive the distributions of missing data. Second, a new order statistic model that can transform the dependent variables into independent variables is developed to simplify the imputation procedure. The unknown parameters of the model are iteratively inferred by the Monte Carlo expectation maximization (MCEM) algorithm. Finally, to verify the effectiveness of the proposed method, a simulation and a real case study of a gas pipeline compressor system are implemented. - Highlights: • A new multiple imputation strategy was developed to derive the PDF of missing data. • A new order statistic model was developed to simplify the imputation procedure. • The parameters of the order statistic model were iteratively inferred by MCEM. • A real case study was conducted to verify the effectiveness of the proposed method.
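The NHPP assumption behind the imputation strategy can be illustrated by sampling event times with the standard thinning method; the power-law intensity and its parameters are assumed for illustration, not taken from the paper:

```python
import random

def sample_nhpp(intensity, t_max, lam_max, seed=0):
    """Sample event times of a nonhomogeneous Poisson process by thinning.

    Illustrates only the NHPP assumption behind the imputation strategy;
    lam_max must bound intensity(t) on [0, t_max].
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)              # candidate from bounding HPP
        if t > t_max:
            return events
        if rng.random() < intensity(t) / lam_max:  # accept w.p. lambda(t)/lam_max
            events.append(t)

# Power-law (Crow/AMSAA-style) intensity a*b*t**(b-1); b > 1 means the
# failure intensity grows over time (numbers assumed, for illustration).
a, b, T = 2.0, 1.5, 10.0
times = sample_nhpp(lambda t: a * b * t ** (b - 1), T, a * b * T ** (b - 1))
print(len(times))
```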

  14. Reliability of EEG Interactions Differs between Measures and Is Specific for Neurological Diseases

    Directory of Open Access Journals (Sweden)

    Yvonne Höller

    2017-07-01

    Alterations of interaction (connectivity) of the EEG reflect pathological processes in patients with neurologic disorders. Nevertheless, it is questionable whether these patterns are reliable over time in different measures of interaction and whether this reliability of the measures is the same across different patient populations. In order to address this topic we examined 22 patients with mild cognitive impairment, five patients with subjective cognitive complaints, six patients with right-lateralized temporal lobe epilepsy, seven patients with left-lateralized temporal lobe epilepsy, and 20 healthy controls. We calculated 14 measures of interaction from two EEG recordings separated by 2 weeks. In order to characterize test-retest reliability, we correlated these measures for each group and compared the correlations between measures and between groups. We found that both measures of interaction as well as groups differed from each other in terms of reliability. The strongest correlation coefficients were found for spectrum, coherence, and full frequency directed transfer function (average rho > 0.9). In the delta (2–4 Hz) range, reliability was lower for mild cognitive impairment compared to healthy controls and left-lateralized temporal lobe epilepsy. In the beta (13–30 Hz), gamma (31–80 Hz), and high gamma (81–125 Hz) frequency ranges we found decreased reliability in subjective cognitive complaints compared to mild cognitive impairment. In the gamma and high gamma range we found increased reliability in left-lateralized temporal lobe epilepsy patients compared to healthy controls. Our results emphasize the importance of documenting reliability of measures of interaction, which may vary considerably between measures, but also between patient populations. We suggest that studies claiming clinical usefulness of measures of interaction should provide information on the reliability of the results. In addition, differences between patient
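Test-retest reliability of the kind reported here reduces to correlating two sessions of the same measure. A minimal Spearman rank correlation (no tie handling) over toy connectivity values:

```python
def _ranks(xs):
    # Rank data 1..n (no tie handling; fine for the illustration).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    A minimal stand-in for the test-retest correlations (rho) reported in
    the study; real data needs tie handling (e.g. scipy.stats.spearmanr).
    """
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Connectivity values from two sessions two weeks apart (toy numbers):
session1 = [0.31, 0.45, 0.12, 0.78, 0.52]
session2 = [0.29, 0.48, 0.15, 0.71, 0.55]
print(spearman(session1, session2))  # -> 1.0 (identical rank order)
```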

  15. Grant Peer Review: Improving Inter-Rater Reliability with Training.

    Science.gov (United States)

    Sattler, David N; McKnight, Patrick E; Naney, Linda; Mathis, Randy

    2015-01-01

    This study developed and evaluated a brief training program for grant reviewers that aimed to increase inter-rater reliability, rating scale knowledge, and effort to read the grant review criteria. Enhancing reviewer training may improve the reliability and accuracy of research grant proposal scoring and funding recommendations. Seventy-five Public Health professors from U.S. research universities watched the training video we produced and assigned scores to the National Institutes of Health scoring criteria proposal summary descriptions. For both novice and experienced reviewers, the training video increased scoring accuracy (the percentage of scores that reflect the true rating scale values), inter-rater reliability, and the amount of time reading the review criteria compared to the no video condition. The increase in reliability for experienced reviewers is notable because it is commonly assumed that reviewers--especially those with experience--have good understanding of the grant review rating scale. The findings suggest that both experienced and novice reviewers who had not received the type of training developed in this study may not have appropriate understanding of the definitions and meaning for each value of the rating scale and that experienced reviewers may overestimate their knowledge of the rating scale. The results underscore the benefits of and need for specialized peer reviewer training.

  16. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
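The notion of a reliable computation time can be demonstrated by racing double precision against a high-precision reference; the fixed parameter r = 4 and the 0.01 divergence threshold are simplifying assumptions (the paper varies the parameter in time):

```python
from decimal import Decimal, getcontext

def reliable_steps(x0=0.1, r=4.0, tol=0.01, max_iter=1000):
    """Iterations until double precision drifts from a 100-digit Decimal
    reference of the logistic map x -> r*x*(1-x).

    Sketch of the 'reliable computation time' (Tc) idea; r is held fixed
    here for brevity.
    """
    getcontext().prec = 100
    x_dbl = x0
    x_ref = Decimal(str(x0))
    r_ref = Decimal(str(r))
    for n in range(1, max_iter + 1):
        x_dbl = r * x_dbl * (1.0 - x_dbl)
        x_ref = r_ref * x_ref * (1 - x_ref)
        if abs(Decimal(x_dbl) - x_ref) > Decimal(str(tol)):
            return n
    return max_iter

# Chaos roughly doubles the initial ~1e-17 representation error each step
# (Lyapunov exponent ln 2 at r = 4), so divergence shows up after ~50 steps.
print(reliable_steps())
```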

  17. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  18. Influence of ITO patterning on reliability of organic light emitting devices

    International Nuclear Information System (INIS)

    Wang, Zhaokui; Naka, Shigeki; Okada, Hiroyuki

    2009-01-01

    Indium tin oxide (ITO) films are widely used as transparent electrodes of organic light emitting devices (OLEDs) because of their excellent conductivity and transparency. Two types of ITO substrate with different surface roughness were selected for use as OLED anodes. In addition, two types of etching process for the ITO substrate, differing particularly in etching time, were also carried out. It was found that the surface roughness and/or the etching process of the ITO substrate strongly influenced the edge of the ITO surface, which in turn affected the operating characteristics and reliability of the devices.

  19. Incorporating Cyber Layer Failures in Composite Power System Reliability Evaluations

    Directory of Open Access Journals (Sweden)

    Yuqi Han

    2015-08-01

    This paper proposes a novel approach to analyze the impacts of cyber layer failures (i.e., protection failures and monitoring failures) on the reliability evaluation of composite power systems. The reliability and availability of the cyber layer and its protection and monitoring functions with various topologies are derived based on a reliability block diagram method. The availability of the physical layer components is modified via a multi-state Markov chain model, in which the component protection and monitoring strategies, as well as the cyber layer topology, are simultaneously considered. Reliability indices of composite power systems are calculated through non-sequential Monte-Carlo simulation. Case studies demonstrate that operational reliability downgrades in cyber layer function failure situations. Moreover, protection function failures have a more significant impact on the downgraded reliability than monitoring function failures do, and the reliability indices are especially sensitive to changes of the cyber layer function availability in the range from 0.95 to 1.
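The reliability-block-diagram and two-state Markov ingredients can be combined in a short sketch; the failure/repair rates and the sensor-link topology below are invented for illustration, not the paper's case study:

```python
def unit_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state Markov (up/down) component:
    A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series(avails):
    # All blocks must be up.
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(avails):
    # At least one block must be up.
    q = 1.0
    for a in avails:
        q *= 1.0 - a
    return 1.0 - q

# Hypothetical monitoring chain: two redundant sensors feeding one
# communication link (rates are illustrative, not from the paper).
sensor = unit_availability(0.01, 0.99)   # A = 0.99
link = unit_availability(0.05, 0.95)     # A = 0.95
a_sys = series([parallel([sensor, sensor]), link])
print(round(a_sys, 4))  # -> 0.9499
```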

  20. Product ion isotopologue pattern: A tool to improve the reliability of elemental composition elucidations of unknown compounds in complex matrices.

    Science.gov (United States)

    Kaufmann, A; Walker, S; Mol, G

    2016-04-15

    Elucidation of the elemental compositions of unknown compounds (e.g., in metabolomics) generally relies on the availability of accurate masses and isotopic ratios. This study focuses on the information provided by the abundance ratio within a product ion pair (monoisotopic versus the first isotopic peak) when isolating and fragmenting the first isotopic ion (first isotopic mass spectrum) of the precursor. This process relies on the capability of the quadrupole within the Q Orbitrap instrument to isolate a very narrow mass window. Selecting only the first isotopic peak (first isotopic mass spectrum) leads to the observation of a unique product ion pair. The lighter ion within such an isotopologue pair is monoisotopic, while the heavier ion contains a single carbon isotope. The observed abundance ratio is governed by the percentage of carbon atoms lost during the fragmentation and can be described by a hypergeometric distribution. The observed carbon isotopologue abundance ratio (product ion isotopologue pattern) gives reliable information regarding the percentage of carbon atoms lost in the fragmentation process. It therefore facilitates the elucidation of the involved precursor and product ions. Unlike conventional isotopic abundances, the product ion isotopologue pattern is hardly affected by isobaric interferences. Furthermore, the appearance of these pairs greatly aids in cleaning up a 'matrix-contaminated' product ion spectrum. The product ion isotopologue pattern is a valuable tool for structural elucidation. It increases confidence in results and permits structural elucidations for heavier ions. This tool is also very useful in elucidating the elemental composition of product ions. Such information is highly valued in the field of multi-residue analysis, where the accurate mass of product ions is required for the confirmation process. Copyright © 2016 John Wiley & Sons, Ltd.
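The hypergeometric argument can be checked directly: for a precursor carrying exactly one 13C among N carbons, the chance that the label survives in a k-carbon fragment reduces to k/N, which fixes the expected abundance ratio within the product-ion pair. The C10 example below is illustrative:

```python
from math import comb

def retained_13c_probability(n_carbons, k_retained):
    """Probability that the single 13C of a first-isotopic precursor survives
    in a product ion retaining k of the N carbons.

    Hypergeometric draw of k carbons with one marked atom:
    P = C(N-1, k-1) / C(N, k) = k / N.
    """
    return comb(n_carbons - 1, k_retained - 1) / comb(n_carbons, k_retained)

# A C10 precursor whose fragment keeps 4 carbons retains the 13C 40% of the
# time, so the first-isotopic : monoisotopic product ratio is 0.4 : 0.6.
print(retained_13c_probability(10, 4))  # -> 0.4
```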

  1. Reliability based impact localization in composite panels using Bayesian updating and the Kalman filter

    Science.gov (United States)

    Morse, Llewellyn; Sharif Khodaei, Zahra; Aliabadi, M. H.

    2018-01-01

    In this work, a reliability based impact detection strategy for a sensorized composite structure is proposed. Impacts are localized using Artificial Neural Networks (ANNs) with recorded guided waves due to impacts used as inputs. To account for variability in the recorded data under operational conditions, Bayesian updating and Kalman filter techniques are applied to improve the reliability of the detection algorithm. The possibility of having one or more faulty sensors is considered, and a decision fusion algorithm based on sub-networks of sensors is proposed to improve the application of the methodology to real structures. A strategy for reliably categorizing impacts into high energy impacts, which are probable to cause damage in the structure (true impacts), and low energy non-damaging impacts (false impacts), has also been proposed to reduce the false alarm rate. The proposed strategy involves employing classification ANNs with different features extracted from captured signals used as inputs. The proposed methodologies are validated by experimental results on a quasi-isotropic composite coupon impacted with a range of impact energies.
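The Kalman filtering ingredient can be sketched as a scalar filter smoothing a noisy feature; the noise variances and the constant 'true' feature value are assumptions for illustration, not the paper's sensor data:

```python
import random

def kalman_smooth(measurements, q=1e-4, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter tracking a (nearly) constant feature in noise.

    Sketch of the filtering step used to stabilize impact features; the
    process/measurement noise variances q and r are assumed values.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q            # predict: constant-state model
        k = p / (p + r)   # Kalman gain
        x += k * (z - x)  # correct with the innovation
        p *= 1.0 - k      # posterior variance
        estimates.append(x)
    return estimates

# Noisy observations of a true feature value of 5.0 (toy data):
rng = random.Random(42)
zs = [5.0 + rng.gauss(0.0, 1.0) for _ in range(500)]
est = kalman_smooth(zs)
print(round(est[-1], 2))
```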

  2. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and failure probability density function and their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal, and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.
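The Weibull material in this outline can be made concrete; these two functions and the characteristic-life check are textbook formulas, not examples taken from the book:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Hazard h(t) = (beta/eta)*(t/eta)**(beta-1): beta < 1 gives DFR
    (infant mortality), beta = 1 the constant-rate exponential case (CFR),
    beta > 1 IFR wear-out."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# At the characteristic life t = eta, R = e**-1 ~ 0.368 for every beta.
print(round(weibull_reliability(1000.0, 2.0, 1000.0), 3))  # -> 0.368
```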

  3. Exact combinatorial reliability analysis of dynamic systems with sequence-dependent failures

    International Nuclear Information System (INIS)

    Xing Liudong; Shrestha, Akhilesh; Dai Yuanshun

    2011-01-01

    Many real-life fault-tolerant systems are subjected to sequence-dependent failure behavior, in which the order in which the fault events occur is important to the system reliability. Such systems can be modeled by dynamic fault trees (DFT) with priority-AND (pAND) gates. Existing approaches for the reliability analysis of systems subjected to sequence-dependent failures are typically state-space-based, simulation-based or inclusion-exclusion-based methods. Those methods either suffer from the state-space explosion problem or require long computation time especially when results with high degree of accuracy are desired. In this paper, an analytical method based on sequential binary decision diagrams is proposed. The proposed approach can analyze the exact reliability of non-repairable dynamic systems subjected to the sequence-dependent failure behavior. Also, the proposed approach is combinatorial and is applicable for analyzing systems with any arbitrary component time-to-failure distributions. The application and advantages of the proposed approach are illustrated through analysis of several examples. - Highlights: → We analyze the sequence-dependent failure behavior using combinatorial models. → The method has no limitation on the type of time-to-failure distributions. → The method is analytical and based on sequential binary decision diagrams (SBDD). → The method is computationally more efficient than existing methods.
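A priority-AND (pAND) gate with two independent exponential components admits a closed form, which a quick Monte Carlo cross-check confirms; the failure rates and mission time below are illustrative assumptions:

```python
import math
import random

def pand_closed_form(la, lb, t):
    """P(A fails before B and both fail by t) for independent exponential
    components: the probability a priority-AND (pAND) gate fires by time t."""
    return (la / (la + lb)) * (1.0 - math.exp(-(la + lb) * t)) \
        - math.exp(-lb * t) * (1.0 - math.exp(-la * t))

def pand_monte_carlo(la, lb, t, n=200_000, seed=1):
    """Monte Carlo cross-check of the closed form."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        fa = rng.expovariate(la)
        fb = rng.expovariate(lb)
        if fa < fb <= t:  # A strictly before B, both within the mission
            hits += 1
    return hits / n

# Sequence dependence matters: P(A before B) != P(B before A) here.
la, lb, t = 1.0, 2.0, 1.0
print(round(pand_closed_form(la, lb, t), 4))  # -> 0.2312
```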

  4. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.
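For a problem small enough to enumerate, the redundancy allocation task can be stated in a few lines; brute force stands in for the paper's stochastic fractal search (which targets problems far too large to enumerate), and all numbers below are illustrative:

```python
from itertools import product

def best_allocation(stages, budget, max_redundancy=5):
    """Exhaustive redundancy allocation for a small series system of
    parallel stages.  Each stage is (component_reliability, component_cost).
    """
    best = (0.0, None)
    for ns in product(range(1, max_redundancy + 1), repeat=len(stages)):
        cost = sum(n * c for n, (_, c) in zip(ns, stages))
        if cost > budget:
            continue  # infeasible allocation
        rel = 1.0
        for n, (r, _) in zip(ns, stages):
            rel *= 1.0 - (1.0 - r) ** n  # parallel stage, series system
        if rel > best[0]:
            best = (rel, ns)
    return best

# Two stages, budget 100 (numbers are illustrative):
reliability, allocation = best_allocation([(0.90, 20.0), (0.80, 15.0)], 100.0)
print(allocation, round(reliability, 4))  # -> (2, 4) 0.9884
```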

  5. Evaluating the Applicability of Data-Driven Dietary Patterns to Independent Samples with a Focus on Measurement Tools for Pattern Similarity.

    Science.gov (United States)

    Castelló, Adela; Buijsse, Brian; Martín, Miguel; Ruiz, Amparo; Casas, Ana M; Baena-Cañada, Jose M; Pastor-Barriuso, Roberto; Antolín, Silvia; Ramos, Manuel; Muñoz, Monserrat; Lluch, Ana; de Juan-Ferré, Ana; Jara, Carlos; Lope, Virginia; Jimeno, María A; Arriola-Arellano, Esperanza; Díaz, Elena; Guillem, Vicente; Carrasco, Eva; Pérez-Gómez, Beatriz; Vioque, Jesús; Pollán, Marina

    2016-12-01

    Diet is a key modifiable risk for many chronic diseases, but it remains unclear whether dietary patterns from one study sample are generalizable to other independent populations. The primary objective of this study was to assess whether data-driven dietary patterns from one study sample are applicable to other populations. The secondary objective was to assess the validity of two criteria of pattern similarity. Six dietary patterns, Western (n=3), Mediterranean, Prudent, and Healthy, from three published studies on breast cancer were reconstructed in a case-control study of 973 breast cancer patients and 973 controls. Three more internal patterns (Western, Prudent, and Mediterranean) were derived from this case-control study's own data. Applicability was assessed by comparing the six reconstructed patterns with the three internal dietary patterns, using the congruence coefficient (CC) between pattern loadings. In cases where any pair met either of two commonly used criteria for declaring patterns similar (CC ≥0.85 or a statistically significant correlation), the similarity of the dietary patterns was double-checked by comparing their associations with risk for breast cancer, to assess whether those two criteria of similarity are actually reliable. Five of the six reconstructed dietary patterns showed high congruence (CC >0.9) to their corresponding dietary pattern derived from the case-control study's data. Similar associations with risk for breast cancer were found in all pairs of dietary patterns that had high CC but not in all pairs of dietary patterns with statistically significant correlations. Similar dietary patterns can be found in independent samples. The P value of a correlation coefficient is less reliable than the CC as a criterion for declaring two dietary patterns similar. This study shows that diet scores based on a particular study are generalizable to other populations. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
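The congruence coefficient used as the similarity criterion is straightforward to compute; the loading vectors below are toy numbers, not the study's patterns:

```python
def congruence_coefficient(x, y):
    """Tucker's congruence coefficient between two loading vectors:
    CC = sum(x*y) / sqrt(sum(x**2) * sum(y**2)).  Unlike Pearson's r it is
    not mean-centred, so it rewards agreement in both sign and magnitude.
    """
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den

# Factor loadings of a 'Western' pattern in two samples (toy numbers):
sample_a = [0.62, 0.55, -0.30, 0.48, -0.21]
sample_b = [0.58, 0.60, -0.25, 0.51, -0.18]
print(round(congruence_coefficient(sample_a, sample_b), 3))  # -> 0.996
```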

  6. Reliability Analysis of a Steel Frame

    Directory of Open Access Journals (Sweden)

    M. Sýkora

    2002-01-01

    A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee for Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time variant analysis.
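For a linear limit state with independent normal variables, the reliability index has the familiar closed form; the resistance and action-effect statistics below are illustrative, not the frame's actual values:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell reliability index for the linear limit state g = R - S with
    independent normal resistance R and action effect S:
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)."""
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    """P_f = Phi(-beta), computed via the error function."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Illustrative numbers: resistance ~ N(350, 35), combined snow+wind action
# effect per Turkstra's rule ~ N(200, 40), all in kNm.
beta = reliability_index(350.0, 35.0, 200.0, 40.0)
print(round(beta, 2))  # -> 2.82
```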

  7. Pattern transitions of oil-water two-phase flow with low water content in rectangular horizontal pipes probed by terahertz spectrum.

    Science.gov (United States)

    Feng, Xin; Wu, Shi-Xiang; Zhao, Kun; Wang, Wei; Zhan, Hong-Lei; Jiang, Chen; Xiao, Li-Zhi; Chen, Shao-Hua

    2015-11-30

    The flow-pattern transition has been a challenging problem in two-phase flow systems. We propose terahertz time-domain spectroscopy (THz-TDS) to investigate the behavior underlying oil-water flow in rectangular horizontal pipes. The low water content (0.03-2.3%) in oil-water flow can be measured accurately and reliably from the relationship between the THz peak amplitude and the water volume fraction. In addition, we obtain the flow-pattern transition boundaries in terms of flow rates. The critical flow rate Qc of the flow-pattern transitions decreases from 0.32 m³/h to 0.18 m³/h when the corresponding water content increases from 0.03% to 2.3%. These properties render THz-TDS a particularly powerful technology for investigating horizontal oil-water two-phase flow systems.

  8. Reliability of Wind Turbine Components-Solder Elements Fatigue Failure

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    on the temperature mean and temperature range. Constant terms and model errors are estimated. The proposed methods are useful to predict damage values for solder joint in power electrical components. Based on the proposed methods it is described how to find the damage level for a given temperature loading profile....... The proposed methods are discussed for application in reliability assessment of Wind Turbine’s electrical components considering physical, model and measurement uncertainties. For further research it is proposed to evaluate damage criteria for electrical components due to the operational temperature...

  9. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives can be maintained by the high-technology systems. Computer systems are typical examples of such systems. We can enjoy our modern lives by using many computer systems. Much more importantly, we have to maintain such systems without failure, but cannot predict when such systems will fail and how to fix such systems without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze the future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance to the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems by using the stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on the theme above from the different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  10. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find an almost global optimal solution in reliability-redundancy optimization problems. Evolutionary algorithms (EAs), paradigms of the evolutionary computation field, are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map for reliability-redundancy optimization problems is proposed in this paper. The proposed method has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
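The chaotic-sequence ingredient can be sketched by iterating Lozi's map and rescaling to [0, 1]; the parameters a = 1.7, b = 0.5 are the classic chaotic choice, and the min-max normalization is an assumption (the paper's exact scaling may differ):

```python
def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Generate n chaotic values from the Lozi map
        x_{k+1} = 1 - a*|x_k| + b*y_k,   y_{k+1} = x_k,
    min-max normalized to [0, 1] so they can replace the uniform random
    numbers in a differential evolution mutation/crossover step.
    """
    xs, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + b * y, x
        xs.append(x)
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) for v in xs]

seq = lozi_sequence(1000)
print(round(min(seq), 1), round(max(seq), 1))  # -> 0.0 1.0
```

In a chaotic DE variant, calls to the pseudo-random generator inside mutation and crossover are simply replaced by successive values of such a sequence.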

  11. Reliability assessment of complex electromechanical systems under epistemic uncertainty

    International Nuclear Information System (INIS)

    Mi, Jinhua; Li, Yan-Feng; Yang, Yuan-Jian; Peng, Weiwen; Huang, Hong-Zhong

    2016-01-01

    The appearance of macro-engineering and mega-projects has led to the increasing complexity of modern electromechanical systems (EMSs). The complexity of the system structure and failure mechanism makes reliability assessment of these systems more difficult. Uncertainty, dynamic and nonlinearity characteristics always exist in engineering systems due to the complexity introduced by changing environments, lack of data and random interference. This paper presents a comprehensive study on the reliability assessment of complex systems. In view of the dynamic characteristics within the system, it makes use of the advantages of the dynamic fault tree (DFT) for characterizing system behaviors. The lifetimes of system units can be expressed as bounded closed intervals by incorporating field failures, test data and design expertise. Then the coefficient of variation (COV) method is employed to estimate the parameters of life distributions. An extended probability-box (P-Box) is proposed to convey the presence of epistemic uncertainty induced by the incomplete information about the data. By mapping the DFT into an equivalent Bayesian network (BN), relevant reliability parameters and indexes have been calculated. Furthermore, the Monte Carlo (MC) simulation method is utilized to compute the DFT model with consideration of the system replacement policy. The results show that this integrated approach is more flexible and effective for assessing the reliability of complex dynamic systems. - Highlights: • A comprehensive study on the reliability assessment of complex systems is presented. • An extended probability-box is proposed to convey the presence of epistemic uncertainty. • The dynamic fault tree model is built. • Bayesian network and Monte Carlo simulation methods are used. • The reliability assessment of a complex electromechanical system is performed.

  12. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    Science.gov (United States)

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524

  13. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
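
    The abstract mentions assessing reliability with open source software. As a minimal sketch of one common estimate, coefficient alpha can be computed from scratch on invented item scores (the data and this plain-Python implementation are illustrative, not the authors' code):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (equal-length lists)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three items rated by five respondents (toy data).
scores = [[2, 4, 3, 5, 4], [3, 4, 2, 5, 5], [2, 5, 3, 4, 4]]
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # → 0.886
```

    As the abstract cautions, a high alpha reflects internal consistency only; it is not a substitute for retest reliability.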

  14. Mining association patterns of drug-interactions using post marketing FDA's spontaneous reporting data.

    Science.gov (United States)

    Ibrahim, Heba; Saad, Amr; Abdo, Amany; Sharaf Eldin, A

    2016-04-01

    Pharmacovigilance (PhV) is an important clinical activity with strong implications for population health and clinical research. The main goal of PhV is the timely detection of adverse drug events (ADEs) that are novel in their clinical nature, severity and/or frequency. Drug interactions (DIs) pose an important problem in the development of new drugs and in post-marketing PhV, contributing to 6-30% of all unexpected ADEs. Therefore, the early detection of DIs is vital. Spontaneous reporting systems (SRS) have served as the core data collection system for post-marketing PhV since the 1960s. The main objective of our study was to identify signals of DIs from SRS. In addition, we present an optimized, tailored mining algorithm called "hybrid Apriori". The proposed algorithm is based on an optimized and modified association rule mining (ARM) approach. The hybrid Apriori algorithm has been applied to the SRS of the United States Food and Drug Administration's (U.S. FDA) adverse events reporting system (FAERS) in order to extract significant association patterns of drug interaction-adverse event (DIAE). We assessed the resulting DIAEs qualitatively and quantitatively using two different triage features: a three-element taxonomy and three performance metrics. These features were applied to two random samples of 100 interacting and 100 non-interacting DIAE patterns. Additionally, we employed a logistic regression (LR) statistical method to quantify the magnitude and direction of interactions in order to test for confounding by co-medication in unknown interacting DIAE patterns. Hybrid Apriori extracted 2933 interacting DIAE patterns (including 1256 serious ones) and 530 non-interacting DIAE patterns. Judged against current knowledge from four different reliable DI resources, the results showed that the proposed method can extract signals of serious interacting DIAEs. 
Various association patterns could be identified based on the relationships among
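
    The paper's "hybrid Apriori" algorithm is not spelled out in this abstract, but the underlying ARM idea of scoring drug-pair → adverse-event patterns by support and confidence can be sketched as follows (the drug names, thresholds and report data are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def mine_pairs(reports, min_support=0.2, min_conf=0.6):
    """Minimal Apriori-style pass: frequent (drug pair -> event) rules
    scored by support and confidence over spontaneous reports."""
    n = len(reports)
    pair_counts = Counter()
    rule_counts = Counter()
    for drugs, events in reports:
        for pair in combinations(sorted(drugs), 2):
            pair_counts[pair] += 1
            for ev in events:
                rule_counts[(pair, ev)] += 1
    rules = []
    for (pair, ev), c in rule_counts.items():
        support = c / n
        conf = c / pair_counts[pair]
        if support >= min_support and conf >= min_conf:
            rules.append((pair, ev, support, conf))
    return rules

# Toy spontaneous reports: (set of co-reported drugs, set of adverse events).
reports = [({"warfarin", "aspirin"}, {"bleeding"}),
           ({"warfarin", "aspirin"}, {"bleeding"}),
           ({"warfarin", "ibuprofen"}, {"bleeding"}),
           ({"aspirin"}, {"nausea"})]
for pair, ev, s, c in mine_pairs(reports):
    print(pair, "->", ev, round(s, 2), round(c, 2))
```

    Real signal detection on FAERS would additionally need disproportionality statistics and the confounding adjustments (e.g., the LR step) the abstract describes.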

  15. Product reliability and thin-film photovoltaics

    Science.gov (United States)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.

  16. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. Paper DETC2015-46982 (draft). ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  17. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    Science.gov (United States)

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how the interaction of subject distribution, sample size, and levels of rater disagreement affects the ICC, and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, the ICC from the convex distribution is smaller than the ICC for the uniform distribution, which in turn is smaller than the ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of a uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
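
    For readers unfamiliar with the statistic under discussion, a one-way random-effects ICC can be computed directly from its ANOVA mean squares. This is the generic textbook formula, not the simulation design of the study above, and the ratings are invented:

```python
def icc_oneway(data):
    """One-way random-effects ICC(1): data[subject][rater] scores,
    with the same number of raters per subject."""
    n = len(data)     # subjects
    k = len(data[0])  # raters per subject
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(data, subj_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Four subjects rated by three raters (toy data with good agreement).
ratings = [[9, 8, 9], [5, 6, 5], [2, 2, 3], [7, 7, 6]]
icc = icc_oneway(ratings)
print(round(icc, 3))  # → 0.954
```

    Note how the between-subject spread drives the value: widely spread (concave) subject distributions inflate ms_between and hence the ICC, which is exactly the design dependency the abstract warns about.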

  18. Secure Cooperative Spectrum Sensing for the Cognitive Radio Network Using Nonuniform Reliability

    Directory of Open Access Journals (Sweden)

    Muhammad Usman

    2014-01-01

    Full Text Available Both reliable detection of the primary signal in a noisy and fading environment and nullifying the effect of unauthorized users are important tasks in cognitive radio networks. To address these issues, we consider a cooperative spectrum sensing approach where each user is assigned a nonuniform reliability based on its sensing performance. Users with a poor channel or a faulty sensor are assigned low reliability. The nonuniform reliabilities serve as identification tags and are used to isolate users with malicious behavior. We consider a link layer attack similar to the Byzantine attack, which falsifies the spectrum sensing data. Three different strategies are presented in this paper to ignore unreliable and malicious users in the network. Considering only reliable users for the global decision improves sensing time and decreases collisions in the control channel. The fusion center uses the degree of reliability as a weighting factor to determine the global decision in scheme I. Schemes II and III consider the unreliability of users, which makes the computations even simpler. The proposed schemes reduce the number of sensing reports and increase the inference accuracy. The advantages of our proposed schemes over conventional cooperative spectrum sensing and the Chair-Varshney optimum rule are demonstrated through simulations.
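
    Scheme I, as described, weights each user's local decision by its reliability before the fusion center thresholds the result. A minimal sketch under that reading (the weights, local decisions and 0.5 threshold are illustrative assumptions, not values from the paper):

```python
def weighted_global_decision(local_decisions, reliabilities, threshold=0.5):
    """Reliability-weighted fusion: weight each user's binary local decision
    by its reliability and compare the normalized sum with a threshold."""
    total = sum(reliabilities)
    score = sum(d * r for d, r in zip(local_decisions, reliabilities)) / total
    return (1 if score >= threshold else 0), score

# Four users; the last has low reliability (suspected faulty or malicious).
decisions = [1, 1, 0, 0]
weights = [0.9, 0.8, 0.7, 0.1]
decision, score = weighted_global_decision(decisions, weights)
print(decision, round(score, 2))
```

    A malicious user's falsified report is automatically discounted here because its low reliability tag contributes little to the weighted score.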

  19. A technical survey on issues of the quantitative evaluation of software reliability

    International Nuclear Information System (INIS)

    Park, J. K; Sung, T. Y.; Eom, H. S.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K. Y.; Park, J. K.

    2000-04-01

    To develop the methodology for evaluating the reliability of software included in digital instrumentation and control (I and C) systems, many kinds of methodologies/techniques that have been proposed in the software reliability engineering field are analyzed to identify their strong and weak points. According to the analysis results, no methodology/technique exists that can be directly applied to the evaluation of software reliability. Thus, additional research to combine the most appropriate existing methodologies/techniques would be needed to evaluate the software reliability. (author)

  20. Design Pattern Canvas: An Introduction to Unified Serious Game Design Patterns

    Directory of Open Access Journals (Sweden)

    Gregor Zavcer

    2014-10-01

    Full Text Available The aim of this article is to start a dialogue and search for a unified game design tool within the game design and research community. As a possible direction, the paper outlines the practice and importance of design pattern use in serious game development and argues that design patterns can make serious game development more efficient by enabling knowledge exchange and better communication between different stakeholders. Furthermore, the use of design patterns provides a foundation for structured research and analysis of games. In order to help advance the state of game design, the paper proposes a new method – the Serious Games Design Pattern Canvas, or shorter Design Pattern Canvas (DPC. The DPC is a design template for developing new or documenting existing (serious game design patterns. It is a visual chart with elements describing a pattern's purpose, mechanic, audience, consequences, collected data, related research and ethical considerations. It assists game designers in aligning their activities by illustrating pattern characteristics and potential trade-offs. One of the goals of the DPC is to either help break larger game design problems into smaller pieces or assist in a bottom-up approach to designing serious games. It is important to note that the paper proposes the first step for co-creation of a game design tool, and further research and validation of the DPC is needed.

  1. Reliability-based assessment of polyethylene pipe creep lifetime

    International Nuclear Information System (INIS)

    Khelif, Rabia; Chateauneuf, Alaa; Chaoui, Kamel

    2007-01-01

    Lifetime management of underground pipelines is mandatory for safe hydrocarbon transmission and distribution systems. The use of high-density polyethylene tubes subjected to internal pressure, external loading and environmental variations requires a reliability study in order to define the service limits and the optimal operating conditions. In service, the time-dependent phenomena, especially creep, take place during the pipe lifetime, leading to significant strength reduction. In this work, the reliability-based assessment of pipe lifetime models is carried out, in order to propose a probabilistic methodology for lifetime model selection and to determine the pipe safety levels as well as the most important parameters for pipeline reliability. This study is enhanced by parametric analysis on pipe configuration, gas pressure and operating temperature

  2. Reliability-based assessment of polyethylene pipe creep lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Khelif, Rabia [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere Cedex (France); LR3MI, Departement de Genie Mecanique, Universite Badji Mokhtar, BP 12, Annaba 23000 (Algeria)], E-mail: rabia.khelif@ifma.fr; Chateauneuf, Alaa [LGC-University Blaise Pascal, Campus des Cezeaux, BP 206, 63174 Aubiere Cedex (France)], E-mail: alaa.chateauneuf@polytech.univ-bpclermont.fr; Chaoui, Kamel [LR3MI, Departement de Genie Mecanique, Universite Badji Mokhtar, BP 12, Annaba 23000 (Algeria)], E-mail: chaoui@univ-annaba.org

    2007-12-15

    Lifetime management of underground pipelines is mandatory for safe hydrocarbon transmission and distribution systems. The use of high-density polyethylene tubes subjected to internal pressure, external loading and environmental variations requires a reliability study in order to define the service limits and the optimal operating conditions. In service, the time-dependent phenomena, especially creep, take place during the pipe lifetime, leading to significant strength reduction. In this work, the reliability-based assessment of pipe lifetime models is carried out, in order to propose a probabilistic methodology for lifetime model selection and to determine the pipe safety levels as well as the most important parameters for pipeline reliability. This study is enhanced by parametric analysis on pipe configuration, gas pressure and operating temperature.

  3. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA concept has been introduced to replace the traditional federated avionics. Different avionics functions are hosted in a shared IMA platform, and IMA adopts partition technologies to provide a logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The feature of resource sharing introduces some unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an architecture analysis and design language- (AADL- based method to establish the reliability model of IMA platform. The single software and hardware error behavior in IMA system is modeled. The corresponding AADL error model of failure propagation among components, between software and hardware, is given. Finally, the display function of IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  4. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    Science.gov (United States)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which can overcome the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, our proposed algorithm can produce more accurate and robust inversion results while requiring almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy, in terms of reaching a better balance of estimation accuracy and computation effort, increases its potential as an inversion technique for reliable and efficient actual measurement of PSD.
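
    The pattern-search half of the proposed hybrid can be illustrated with a classic compass search, here minimizing a toy quadratic standing in for the extinction-spectrum misfit (the objective, step sizes and tolerances are illustrative; this is not the authors' implementation):

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Compass (pattern) search: poll +/- step along each axis,
    accept improving moves, shrink the step when no move improves."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

# Toy objective standing in for the spectral-extinction misfit function.
obj = lambda v: (v[0] - 1.2) ** 2 + (v[1] + 0.7) ** 2
best, fbest = pattern_search(obj, [0.0, 0.0])
print([round(b, 3) for b in best], fbest)
```

    In the hybrid, a global heuristic such as ABC would supply the starting point x0, and the derivative-free local polish above refines it; that division of labor is what buys the accuracy/effort balance the abstract claims.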

  5. Evaluation of nodal reliability risk in a deregulated power system with photovoltaic power penetration

    DEFF Research Database (Denmark)

    Zhao, Qian; Wang, Peng; Goel, Lalit

    2014-01-01

    Owing to the intermittent characteristic of solar radiation, power system reliability may be affected with high photovoltaic (PV) power penetration. To reduce large variation of PV power, additional system balancing reserve would be needed. In deregulated power systems, deployment of reserves and customer reliability requirements are correlated with energy and reserve prices. Therefore a new method should be developed to evaluate the impacts of PV power on customer reliability and system reserve deployment in the new environment. In this study, a method based on the pseudo-sequential Monte Carlo simulation technique has been proposed to evaluate the reserve deployment and customers' nodal reliability with high PV power penetration. The proposed method can effectively model the chronological aspects and stochastic characteristics of PV power and system operation with high computation efficiency.

  6. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    Science.gov (United States)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which four configurations are possible. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration, using the mean time to failure (MTTF) as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) ambient temperature, (ii) power loss of the semiconductor switch and (iii) thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that under different conditions, different configurations have the higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, levelised costs of the different configurations are analysed for a fair comparison.
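
    For constant failure rates, the MTTF comparison between component-level configurations reduces to standard series/parallel formulas. A sketch for a pair of identical switches (the rate value and the open-circuit-only failure assumption are illustrative; the paper additionally treats short-circuit faults and junction temperature effects):

```python
def mttf_single(lam):
    """MTTF of one switch with constant failure rate lam (exponential life)."""
    return 1.0 / lam

def mttf_series_pair(lam):
    """Two switches in series: the pair is lost as soon as either fails open,
    so the effective failure rate doubles."""
    return 1.0 / (2.0 * lam)

def mttf_parallel_pair(lam):
    """Two switches in parallel: lost only when both have failed open
    (standard two-unit active redundancy: MTTF = 1/lam + 1/(2*lam))."""
    return 1.5 / lam

lam = 2e-6  # illustrative failure rate per hour
print(mttf_series_pair(lam), mttf_single(lam), mttf_parallel_pair(lam))
```

    Under short-circuit faults the ordering reverses (series tolerates a shorted device, parallel does not), which is why the paper analyses both fault types before declaring a winner.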

  7. New secure bilateral transaction determination and study of pattern under contingencies and UPFC in competitive hybrid electricity markets

    International Nuclear Information System (INIS)

    Kumar, A.; Chanana, S.

    2009-01-01

    In the competitive electricity environment, the flexibility of power transactions is expected to drastically increase among the trading partners and can compromise the system security and reliability. These transactions are to be evaluated ahead of their scheduling in a day-ahead and hour-ahead market to avoid congestion and ensure their feasibility with respect to the system operating conditions. The security of the transactions has become essential in the new environment for better planning and management of competitive electricity markets. This paper proposes a new method of secure bilateral transaction determination using AC distribution factors based on the full Jacobian sensitivity and considering the impact of slack bus for pool and bilateral coordinated markets. The secure bilateral transactions have also been determined considering critical line outage contingencies cases. The bilateral transaction matrix pattern has also been determined in the presence of unified power flow controller (UPFC). The optimal location of UPFC has been determined using mixed integer non-linear programming approach. The proposed technique has been applied on IEEE 24-bus reliability test system (RTS). (author)

  8. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    Full Text Available As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed “weighted connections”. This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS. In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.

  9. Evaluation of Smart Grid Technologies Employed for System Reliability Improvement: Pacific Northwest Smart Grid Demonstration Experience

    Energy Technology Data Exchange (ETDEWEB)

    Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.

    2017-06-01

    The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch’s t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
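
    Welch's t-test, as proposed, compares reliability-index samples from before and after the smart grid deployment without assuming equal variances. A from-scratch sketch on invented SAIDI figures (it computes only the statistic and the Welch-Satterthwaite degrees of freedom; the significance decision would come from a t-table):

```python
def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with unequal variances."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    na, nb = len(a), len(b)
    va, vb = var(a) / na, var(b) / nb
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

# Monthly SAIDI (minutes) before vs. after automation (made-up numbers).
before = [120, 135, 128, 140, 150, 131]
after = [95, 102, 88, 110, 99, 93]
t, df = welch_t(before, after)
print(round(t, 2), round(df, 1))
```

    A large positive t here (mean SAIDI dropped) would support the hypothesis of improved reliability; the same recipe applies to SAIFI or CAIDI samples.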

  10. Research on Connection and Function Reliability of the Oil&Gas Pipeline System

    Directory of Open Access Journals (Sweden)

    Xu Bo

    2017-01-01

    Full Text Available Pipeline transportation is the optimal way to deliver energy in terms of safety, efficiency and environmental protection. Because of the complexity of the pipeline's external environment, including geological hazards and social and cultural influences, it is a great challenge to operate a pipeline safely and reliably. Therefore, pipeline reliability becomes an important issue. Based on classical reliability theory, an analysis of the pipeline system is carried out, a reliability model of the pipeline system is built, and the calculation is addressed thereafter. The connection and function reliability model is then applied to an actual operating pipeline system; with the proposed methodology, the connection reliability and function reliability of the pipeline system are obtained. This paper is the first to consider connection reliability and function reliability separately, making a significant contribution to establishing a mathematical reliability model of pipeline systems and hence providing fundamental groundwork for future pipeline reliability research.
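
    The connection-reliability side of such a model rests on standard series/parallel combination of segment reliabilities. A minimal sketch (the segment values and the twin-crossing topology are invented for illustration):

```python
def series_reliability(rs):
    """Series system of independent elements: all must work."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    """Parallel (redundant) elements: the group fails only if all fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Toy pipeline: three sections in series, the middle one a twin (parallel)
# river crossing built for redundancy.
crossing = parallel_reliability([0.95, 0.95])
system = series_reliability([0.99, crossing, 0.98])
print(round(system, 4))
```

    Function reliability (delivering the required flow under varying demand) would need a further layer on top of this purely structural connection model, which is the separation the paper argues for.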

  11. Information fusion in personal biometric authentication based on the iris pattern

    International Nuclear Information System (INIS)

    Wang, Fenghua; Han, Jiuqiang

    2009-01-01

    Information fusion in biometrics has received considerable attention. This paper focuses on the application of information fusion techniques in iris recognition. To improve the reliability and accuracy of personal identification based on the iris pattern, this paper proposes the schemes of multialgorithmic fusion and multiinstance fusion. Multialgorithmic fusion integrates the improved phase algorithm and the DCT-based algorithm, and multiinstance fusion combines information from the left iris and the right iris of an individual. Both multialgorithmic fusion and multiinstance fusion are carried out at the matching score level and the support vector machine (SVM)-based fusion rule is utilized to generate fused scores for final decision. The experimental results on the noisy iris database UBIRIS demonstrate that the proposed fusion schemes can perform better than the single recognition systems, and further prove that information fusion techniques are feasible and effective to improve the accuracy and robustness of iris recognition especially under noisy conditions

  12. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems with only partial information about the reliability of components, and with differing conditions of component independence, may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into the complexity of the optimization problems that must be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in this paper. This algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, both when the components are independent and when information about independence is lacking. A numerical example illustrates the algorithm.
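
    For monotone systems the optimization collapses to the endpoints of the component intervals, which is what makes a simplified algorithm possible. A sketch for the two simplest monotone structures (the interval values are invented; the paper's algorithm covers general monotone systems and weaker independence assumptions):

```python
def interval_series(bounds):
    """Lower/upper reliability of a series system of independent components
    given [lo, hi] reliability intervals (monotone => plug in endpoints)."""
    lo = hi = 1.0
    for l, h in bounds:
        lo *= l
        hi *= h
    return lo, hi

def interval_parallel(bounds):
    """Lower/upper reliability of a parallel system under the same inputs."""
    q_best = q_worst = 1.0
    for l, h in bounds:
        q_best *= (1.0 - h)   # all components at their upper reliability
        q_worst *= (1.0 - l)  # all components at their lower reliability
    return 1.0 - q_worst, 1.0 - q_best

b = [(0.90, 0.95), (0.85, 0.99)]
print(interval_series(b), interval_parallel(b))
```

    Monotonicity guarantees the extremes occur at interval endpoints, so no general-purpose optimizer is needed for these structures; that observation is the intuition behind decomposing the harder cases.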

  13. Statistical Bayesian method for reliability evaluation based on ADT data

    Science.gov (United States)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and imprecise estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual Bayesian solution to this problem loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, the Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimates. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
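
    The Wiener-process model in the abstract implies a first-passage definition of failure: the unit fails when its degradation path first crosses a threshold. A Monte Carlo sketch of reliability at time t under that model (the drift, diffusion, threshold and discretization are invented; the paper estimates such parameters by Bayesian updating rather than fixing them):

```python
import random

def wiener_reliability(mu, sigma, threshold, t, n=5000, steps=100, seed=7):
    """Monte Carlo reliability at time t for a Wiener degradation path
    X(s) = mu*s + sigma*B(s); failure = first crossing of the threshold."""
    rng = random.Random(seed)
    dt = t / steps
    survive = 0
    for _ in range(n):
        x, failed = 0.0, False
        for _ in range(steps):
            x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            if x >= threshold:
                failed = True
                break
        survive += not failed
    return survive / n

r = wiener_reliability(mu=0.1, sigma=0.3, threshold=15.0, t=100.0)
print(round(r, 3))
```

    In an ADT setting the drift mu would be tied to the stress level through the acceleration model, so reliability at the use condition is extrapolated from the high-stress estimates.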

  14. Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity

    Science.gov (United States)

    McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio

    2010-01-01

    We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807

  15. Hybrid Structural Reliability Analysis under Multisource Uncertainties Based on Universal Grey Numbers

    Directory of Open Access Journals (Sweden)

    Xingfa Yang

    2018-01-01

    Full Text Available Nondeterministic parameters of certain distributions are employed to model structural uncertainties, which are usually assumed to be stochastic factors. However, model parameters may not be precisely represented, owing to factors in engineering practice such as lack of sufficient data, data with fuzziness, and unknown-but-bounded conditions. To this end, interval and fuzzy parameters are introduced, and an efficient approach to structural reliability analysis with random-interval-fuzzy hybrid parameters is proposed in this study. Fuzzy parameters are first converted to equivalent random ones based on the equal entropy principle. The 3σ criterion is then employed to transform the equivalent random and the original random parameters to interval variables. In doing so, the hybrid reliability problem is transformed into one with only interval variables, in other words, a nonprobabilistic reliability analysis problem. Nevertheless, the problem of interval extension exists in interval arithmetic, especially for nonlinear systems. Therefore, universal grey mathematics, which can tackle the issue of interval extension, is employed to solve the nonprobabilistic reliability analysis problem. The results show that the proposed method obtains more conservative results for the hybrid structural reliability.
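
    The pipeline described above is: fuzzy → equivalent random (equal entropy principle), then random → interval (3σ criterion). A sketch of the second step, with a crude moment-matching stand-in for the first (the equal-entropy conversion itself is not reproduced here, and the triangular fuzzy number is invented):

```python
def three_sigma_interval(mean, std):
    """3-sigma criterion: map a random variable to [mu - 3s, mu + 3s]."""
    return mean - 3.0 * std, mean + 3.0 * std

def fuzzy_triangular_to_normal(a, m, b):
    """Stand-in (assumption, not the paper's equal-entropy rule): match the
    triangular fuzzy number (a, m, b), read as a triangular distribution,
    to a normal with the same mean and variance."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var ** 0.5

mu, s = fuzzy_triangular_to_normal(4.0, 5.0, 6.5)
print(three_sigma_interval(mu, s))
```

    Once every parameter is boxed into such an interval, only interval (nonprobabilistic) arithmetic remains, which is where the universal grey numbers take over to control interval-extension overestimation.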

  16. Sensitivity analysis of smart grids reliability due to indirect cyber-power interdependencies under various DG technologies, DG penetrations, and operation times

    International Nuclear Information System (INIS)

    Hashemi-Dezaki, Hamed; Agah, Seyed Mohammad Mousavi; Askarian-Abyaneh, Hossein; Haeri-Khiavi, Homayoun

    2016-01-01

    Highlights: • A novel risk assessment method considering the ICPIs is proposed. • Protection and monitoring systems are studied as applications of the ICPIs. • The uncertainty of results is analyzed in addition to expected average results. • ICPI impacts due to DG penetrations under various DG technologies are analyzed. • Well-being criteria are provided in addition to reliability indices. - Abstract: Cyber failures, such as failures in protection and monitoring systems, do not stop the operation or change the behavior of the power system instantly, but they adversely affect the performance of the power system in the face of potential failures. Such indirect cyber-power interdependencies (ICPIs) may either increase the probability of future failures or delay the response to a present failure of the power elements. Much less effort has been devoted in the literature to investigating the impacts of ICPIs, particularly in a stochastic simulation setting. In this paper, a novel stochastic reliability evaluation method that considers the ICPI impacts under various uncertain parameters is proposed. The consideration of uncertainty regarding renewable distributed generation (DG) units, consumption patterns, power and cyber elements, and ICPIs is one of the most important contributions of the proposed method. Further, a novel stochastic state-upgrading scheme is introduced to account for the ICPIs of protection and monitoring systems. By using the proposed state-upgrading methodology, it is possible to evaluate the reliability of a smart grid subject to ICPIs using conventional reliability evaluation methods. The proposed risk assessment methodology is applied to an actual distribution grid. Several sensitivity studies are performed to gain insight into how the penetration level of DG units under various DG technology scenarios can affect the ICPI impacts on the risk level of the smart grid. The test results show that regardless of the DG

  17. Flexural Capability of Patterned Transparent Conductive Substrate by Performing Electrical Measurements and Stress Simulations

    Directory of Open Access Journals (Sweden)

    Chang-Chun Lee

    2016-10-01

    Full Text Available The suitability of stacked thin films for next-generation display technology was analyzed based on their properties and geometrical designs to evaluate the mechanical reliability of transparent conducting thin films utilized in flexural displays. In general, the high bending stress induced by various operating conditions is a major concern regarding the mechanical reliability of indium–tin–oxide (ITO) films deposited on polyethylene terephthalate (PET) substrates; mechanical reliability is commonly used to estimate the flexibility of displays. However, the pattern effect is rarely investigated when estimating the mechanical reliability of ITO/PET films. Thus, this study examined the flexural capability of patterned ITO/PET films with two different line widths by conducting bending tests and sheet resistance measurements. Moreover, a stress–strain simulation based on finite element analysis was performed on the patterned ITO/PET to explore the stress impact of stacked film structures under various levels of flexural load. Results show that the design of the ITO/PET film can be applied in developing mechanically reliable flexible electronics.

  18. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Camara Vincent A. R.

    1998-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results. It is shown that empirical Bayes reliability functions are in general sensitive to the choice of the loss function, and that the squared error loss does not always yield the best empirical Bayes reliability estimate.

  19. Reliability and validity of two isometric squat tests.

    Science.gov (United States)

    Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U

    2002-05-01

    The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficient [ICC](IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p < 0.05), but a weaker relation between isometric and dynamic squat and FHS test performances. The imperfect relation between isometric and dynamic squat and FHS test performance can be attributed to differences in the movement patterns of the tests
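    The intraclass correlation coefficients reported above can be computed from a subjects × trials table; the sketch below (plain Python, made-up force values) shows the ICC(3,1) two-way consistency form, which is assumed here and may differ from the exact model the authors used:

```python
def icc_3_1(scores):
    """ICC(3,1): two-way mixed model, consistency, single measures."""
    n, k = len(scores), len(scores[0])                 # subjects, trials
    grand = sum(map(sum, scores)) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    trial_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_trial = n * sum((m - grand) ** 2 for m in trial_means)
    ms_subj = ss_subj / (n - 1)                        # between-subjects mean square
    ms_err = (ss_total - ss_subj - ss_trial) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Hypothetical peak-force values (N) for five subjects on two occasions.
forces = [[2100, 2150], [1850, 1830], [2400, 2380], [1990, 2005], [2250, 2240]]
print(round(icc_3_1(forces), 3))
```

    Perfectly repeatable trial-to-trial scores give an ICC of 1.0; the closer the within-subject differences are to zero relative to the between-subject spread, the closer the coefficient is to 1.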

  20. Reliability allocation problem in a series-parallel system

    International Nuclear Information System (INIS)

    Yalaoui, Alice; Chu, Chengbin; Chatelet, Eric

    2005-01-01

    In order to improve system reliability, designers may introduce into a system different technologies in parallel. When each technology is composed of components in series, the configuration belongs to the class of series-parallel systems. This type of system has not been studied as much as the parallel-series architecture, and no existing methods are dedicated to reliability allocation in series-parallel systems with different technologies. We propose in this paper theoretical and practical results for the allocation problem in a series-parallel system. Two resolution approaches are developed. First, a one-stage problem is studied and the results are exploited for the multi-stage problem; a theoretical condition for obtaining the optimal allocation is derived. Since this condition is too restrictive, we then propose an alternative approach based on an approximated function and the results of the one-stage study. This second approach is applied to numerical examples
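    For independent components, the series-parallel structure described above has a simple closed-form system reliability, which is the quantity an allocation procedure optimizes; a minimal sketch (plain Python, illustrative numbers):

```python
from math import prod

def series_parallel_reliability(techs):
    """techs: parallel technologies, each a list of series component reliabilities."""
    # A technology works only if every component in its series chain works.
    branch = [prod(r) for r in techs]
    # The system works if at least one parallel technology works.
    return 1.0 - prod(1.0 - b for b in branch)

# Two redundant technologies: one with 2 components, one with 3.
r = series_parallel_reliability([[0.9, 0.95], [0.8, 0.85, 0.9]])
print(round(r, 5))  # 0.94374
```

    Note the asymmetry with the parallel-series case mentioned in the abstract: there the product over series stages is taken last, which yields a different (and generally lower) system reliability for the same component values.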

  1. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    Science.gov (United States)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising means of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for the optimal design of a SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with conventional design using monthly average daily load and insolation.

  2. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    Science.gov (United States)

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
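    Assuming independent device failures, evaluating a fault tree like the ones this methodology generates reduces to combining basic-event probabilities through OR and AND gates; a minimal sketch (plain Python, with a hypothetical topology and illustrative probabilities):

```python
def gate_or(*p):
    """Top event occurs if ANY input event occurs (series dependency)."""
    q = 1.0
    for pi in p:
        q *= 1.0 - pi
    return 1.0 - q

def gate_and(*p):
    """Top event occurs only if ALL input events occur (redundancy)."""
    out = 1.0
    for pi in p:
        out *= pi
    return out

# A sensor's data is lost if the sensor itself fails, OR if both
# redundant routers on its path fail (hypothetical topology).
p_sensor, p_router = 0.02, 0.05
p_loss = gate_or(p_sensor, gate_and(p_router, p_router))
print(p_loss)  # 1 - 0.98 * (1 - 0.0025) ~= 0.02245
```

    Redundancy shows up as an AND gate: the pair of routers contributes only 0.0025 to the loss probability, versus 0.05 for a single router.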

  3. Seeking high reliability in primary care: Leadership, tools, and organization.

    Science.gov (United States)

    Weaver, Robert R

    2015-01-01

    Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles and the use of key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls, which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an

  4. Data reliability in complex directed networks

    Science.gov (United States)

    Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir

    2013-12-01

    The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.

  5. Data reliability in complex directed networks

    International Nuclear Information System (INIS)

    Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir

    2013-01-01

    The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way. (paper)

  6. A Compact, Light-weight, Reliable and Highly Efficient Heat Pump for, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — RTI proposes to develop an efficient, reliable, compact and lightweight heat pump for space applications. The proposed effort is expected to lead to (at the end of...

  7. Reliability of lifeline networks under seismic hazard

    International Nuclear Information System (INIS)

    Selcuk, A. Sevtap; Yuecemen, M. Semih

    1999-01-01

    Lifelines, such as pipelines, transportation, communication and power transmission systems, are networks which extend spatially over large geographical regions. The quantification of the reliability (survival probability) of a lifeline under seismic threat requires attention, as the proper functioning of these systems during or after a destructive earthquake is vital. In this study, a lifeline is idealized as an equivalent network with the capacity of its elements being random and spatially correlated and a comprehensive probabilistic model for the assessment of the reliability of lifelines under earthquake loads is developed. The seismic hazard that the network is exposed to is described by a probability distribution derived by using the past earthquake occurrence data. The seismic hazard analysis is based on the 'classical' seismic hazard analysis model with some modifications. An efficient algorithm developed by Yoo and Deo (Yoo YB, Deo N. A comparison of algorithms for terminal pair reliability. IEEE Transactions on Reliability 1988; 37: 210-215) is utilized for the evaluation of the network reliability. This algorithm eliminates the CPU time and memory capacity problems for large networks. A comprehensive computer program, called LIFEPACK is coded in Fortran language in order to carry out the numerical computations. Two detailed case studies are presented to show the implementation of the proposed model
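    Terminal-pair reliability of a small network can be checked by brute-force enumeration of element up/down states, which is exactly the exponential cost that efficient algorithms such as Yoo and Deo's avoid; a sketch (plain Python, illustrative network, independent element failures assumed):

```python
from itertools import product

def terminal_pair_reliability(nodes, edges, s, t):
    """edges: list of (u, v, p_up). Exhaustive state enumeration --
    exponential in the number of edges, fine only for small networks."""
    total = 0.0
    for state in product((True, False), repeat=len(edges)):
        prob = 1.0
        adj = {n: set() for n in nodes}
        for up, (u, v, p) in zip(state, edges):
            prob *= p if up else 1.0 - p
            if up:
                adj[u].add(v)
                adj[v].add(u)
        reached, stack = {s}, [s]          # depth-first search from s
        while stack:
            for m in adj[stack.pop()]:
                if m not in reached:
                    reached.add(m)
                    stack.append(m)
        if t in reached:
            total += prob
    return total

# Direct link s-t plus a two-hop detour through a.
edges = [("s", "a", 0.9), ("a", "t", 0.9), ("s", "t", 0.8)]
print(terminal_pair_reliability({"s", "a", "t"}, edges, "s", "t"))  # ~= 0.962
```

    Hand check: the pair survives if the direct link is up (0.8), or if it is down and both detour links are up (0.2 × 0.81), giving 0.962.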

  8. On the reliability of seasonal climate forecasts

    Science.gov (United States)

    Weisheimer, A.; Palmer, T. N.

    2014-01-01

    Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1–5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that ‘goodness’ should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a ‘5’ should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of ‘goodness’ rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly), is found. Finally, we discuss the prospects of reaching ‘5’ across all regions and variables in 30 years' time. PMID:24789559

  9. Dynamic reliability networks with self-healing units

    International Nuclear Information System (INIS)

    Jenab, K.; Seyed Hosseini, S.M.; Dhillon, B.S.

    2008-01-01

    This paper presents an analytical approach for dynamic reliability networks used for the failure limit strategy in maintenance optimization. The proposed approach utilizes the moment generating function (MGF) and the flow-graph concept to depict the functional and reliability diagrams of a system comprising series, parallel or mixed configurations of self-healing units. A self-healing unit is characterized by embedded failure detection and recovery mechanisms, represented by a self-loop in the flow-graph network. The newly developed analytical approach provides the probability of system failure and time-to-failure data, i.e., the mean and standard deviation of time-to-failure, used for maintenance optimization

  10. Reliability analysis of reinforced concrete grids with nonlinear material behavior

    Energy Technology Data Exchange (ETDEWEB)

    Neves, Rodrigo A [EESC-USP, Av. Trabalhador Sao Carlense, 400, 13566-590 Sao Carlos (Brazil); Chateauneuf, Alaa [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere cedex (France)]. E-mail: alaa.chateauneuf@ifma.fr; Venturini, Wilson S [EESC-USP, Av. Trabalhador Sao Carlense, 400, 13566-590 Sao Carlos (Brazil)]. E-mail: venturin@sc.usp.br; Lemaire, Maurice [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere cedex (France)

    2006-06-15

    Reinforced concrete grids are usually used to support large floor slabs. These grids are characterized by a great number of critical cross-sections, where the overall failure is usually sudden. However, the nonlinear behavior of concrete leads to the redistribution of internal forces, and accurate reliability assessment becomes mandatory. This paper presents a reliability study of reinforced concrete (RC) grids based on coupling Monte Carlo simulations with response surface techniques. This approach allows us to analyze real RC grids with a large number of failure components. The response surface is used to evaluate structural safety by means of first-order reliability methods. The application to simple grids shows the interest of the proposed method and the role of moment redistribution in the reliability assessment.
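    The Monte Carlo side of such a coupling estimates the failure probability by sampling the random variables and counting limit-state violations; a crude sketch without the response-surface or FORM acceleration (plain Python, with an illustrative limit state g = R − S, not the paper's grid model):

```python
import random

def mc_failure_probability(g, sampler, n=200_000, seed=42):
    """Crude Monte Carlo estimate of P[g(X) < 0]."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(*sampler(rng)) < 0.0)
    return fails / n

# Illustrative limit state: resistance R ~ N(100, 5), load S ~ N(70, 8).
pf = mc_failure_probability(lambda r, s: r - s,
                            lambda rng: (rng.gauss(100, 5), rng.gauss(70, 8)))
print(pf)
```

    With the margin distributed as N(30, √89), the true failure probability is around 7 × 10⁻⁴, which is why plain Monte Carlo needs so many samples and why a cheap response surface standing in for the expensive structural model is attractive.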

  11. Inclusion of task dependence in human reliability analysis

    International Nuclear Information System (INIS)

    Su, Xiaoyan; Mahadevan, Sankaran; Xu, Peida; Deng, Yong

    2014-01-01

    Dependence assessment among human errors in human reliability analysis (HRA) is an important issue, which includes the evaluation of the dependence among human tasks and the effect of the dependence on the final human error probability (HEP). This paper presents a computational model to handle dependence in human reliability analysis. The aim of the study is to automatically provide conclusions on the overall degree of dependence and to calculate the conditional human error probability (CHEP) once the judgments of the input factors are given. The dependence influencing factors are first identified by experts, and the priorities of these factors are also taken into consideration. Anchors and qualitative labels are provided as guidance for the HRA analyst's judgment of the input factors. The overall degree of dependence between human failure events is calculated based on the input values and the weights of the input factors. Finally, the CHEP is obtained according to a computing formula derived from the technique for human error rate prediction (THERP) method. The proposed method is able to quantify the subjective judgment of the experts and improve the transparency of the HEP evaluation process. Two examples are illustrated to show the effectiveness and the flexibility of the proposed method. - Highlights: • We propose a computational model to handle dependence in human reliability analysis. • The priorities of the dependence influencing factors are taken into consideration. • The overall dependence degree is determined by input judgments and the weights of factors. • The CHEP is obtained according to a computing formula derived from THERP
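    The THERP closing formulas the CHEP computation derives from are the standard conditional-probability equations for the five dependence levels (Swain and Guttmann's handbook); a sketch of just those formulas (plain Python, illustrative nominal HEP):

```python
# THERP conditional HEP of a task given failure of the preceding task,
# for the five standard dependence levels.
THERP_CHEP = {
    "zero":     lambda hep: hep,                    # independence
    "low":      lambda hep: (1 + 19 * hep) / 20,
    "moderate": lambda hep: (1 + 6 * hep) / 7,
    "high":     lambda hep: (1 + hep) / 2,
    "complete": lambda hep: 1.0,                    # failure is certain
}

nominal_hep = 0.01
for level, f in THERP_CHEP.items():
    print(level, round(f(nominal_hep), 4))
```

    Even "low" dependence raises a nominal HEP of 0.01 to about 0.06, which is why the choice of dependence level, the judgment the paper's model formalizes, dominates the final CHEP.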

  12. Reliability-Based Decision Fusion in Multimodal Biometric Verification Systems

    Directory of Open Access Journals (Sweden)

    Kryszczuk Krzysztof

    2007-01-01

    Full Text Available We present a methodology for reliability estimation in the multimodal biometric verification scenario. Reliability estimation has been shown to be an efficient and accurate way of predicting and correcting erroneous classification decisions in both unimodal (speech, face, online signature) and multimodal (speech and face) systems. While initial research results indicate the high potential of the proposed methodology, the performance of reliability estimation in a multimodal setting has not been sufficiently studied or evaluated. In this paper, we demonstrate the advantages of using unimodal reliability information to perform an efficient biometric fusion of two modalities. We further show the presented method to be superior to state-of-the-art multimodal decision-level fusion schemes. The experimental evaluation presented in this paper is based on the popular benchmarking bimodal BANCA database.

  13. Reliability Equivalence to Symmetrical UHVDC Transmission Systems Considering Redundant Structure Configuration

    Directory of Open Access Journals (Sweden)

    Xing Jiang

    2018-03-01

    Full Text Available In recent years, the ultra-high voltage direct current (UHVDC) transmission system has developed rapidly owing to its significant long-distance, high-capacity and low-loss properties. Equipment failures and overall outages of the UHVDC system have an increasingly vital influence on the power supply of the receiving-end grid. To improve the reliability level of UHVDC systems, a quantitative selection and configuration approach for redundant structures is proposed in this paper, based on multi-state reliability equivalence. Specifically, considering the symmetry of a UHVDC system, a state space model is established for a monopole rather than the bipole, which effectively reduces the state space dimension by deducing a reliability merging operator for the two poles. Considering the standby effect of AC filters and the recovery effect of converter units, the number of available converter units and the corresponding probabilities are expressed in universal generating function (UGF) form. Then, a sensitivity analysis is performed to quantify the impact of component reliability parameters on system reliability and to determine which specific devices should be configured in the redundant structure. Finally, a cost-benefit analysis is utilized to help determine the optimal scheme of redundant devices. Case studies are conducted to demonstrate the effectiveness and accuracy of the proposed method. Based on the numerical results, configuring a set of redundant transformers is indicated to be of the greatest significance for improving the reliability level of UHVDC transmission systems.
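    The UGF representation mentioned above encodes the number of available converter units as a polynomial in z whose exponents count units and whose coefficients are probabilities; combining independent units is then just polynomial multiplication. A minimal sketch (plain Python, illustrative availabilities, not the paper's parameters):

```python
from collections import defaultdict

def ugf_unit(p_avail):
    """UGF of one converter unit: z^1 with prob p (available), z^0 otherwise."""
    return {1: p_avail, 0: 1.0 - p_avail}

def ugf_product(u1, u2):
    """Multiply two UGF polynomials; exponents (available-unit counts) add."""
    out = defaultdict(float)
    for k1, p1 in u1.items():
        for k2, p2 in u2.items():
            out[k1 + k2] += p1 * p2
    return dict(out)

# Distribution of available units for two converter units, each 95% available.
u = ugf_product(ugf_unit(0.95), ugf_unit(0.95))
print(u)  # probabilities for 2, 1 and 0 available units
```

    The same product operator extends to any number of units, and system-level indices follow by summing coefficients over the acceptable exponents (e.g. "at least one unit available").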

  14. LIF: A new Kriging based learning function and its application to structural reliability analysis

    International Nuclear Information System (INIS)

    Sun, Zhili; Wang, Jian; Li, Rui; Tong, Cao

    2017-01-01

    The main task of structural reliability analysis is to estimate the failure probability of a structure, taking the randomness of input variables into account. To represent structural behavior realistically, numerical models have become increasingly complicated and time-consuming, which increases the difficulty of reliability analysis. Therefore, sequential strategies for the design of experiments (DoE) have been proposed. In this research, a new learning function, named the least improvement function (LIF), is proposed to update the DoE of Kriging-based reliability analysis methods. LIF quantifies how much the accuracy of the estimated failure probability will improve if a given point is added to the DoE. It takes both the statistical information provided by the Kriging model and the joint probability density function of the input variables into account, which is the most important difference from existing learning functions. The maximum point of LIF is approximately determined with Markov chain Monte Carlo (MCMC) simulation. A new reliability analysis method is developed based on the Kriging model, in which LIF, MCMC and Monte Carlo (MC) simulation are employed. Three examples are analyzed. Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small probabilities, complicated limit states and engineering problems of high dimension. - Highlights: • Least improvement function (LIF) is proposed for structural reliability analysis. • LIF takes both Kriging-based statistical information and the joint PDF into account. • A reliability analysis method is constructed based on Kriging, MCS and LIF.

  15. A Closed-Form Technique for the Reliability and Risk Assessment of Wind Turbine Systems

    Directory of Open Access Journals (Sweden)

    Leonardo Dueñas-Osorio

    2012-06-01

    Full Text Available This paper proposes a closed-form method to evaluate wind turbine system reliability and the associated failure consequences. Monte Carlo simulation, a widely used approach for system reliability assessment, usually requires large numbers of computational experiments, while existing analytical methods are limited to simple system event configurations with a focus on average values of reliability metrics. By analyzing a wind turbine system and its components in a combinatorial yet computationally efficient form, the proposed approach provides the entire probability distribution of system failure, covering all possible configurations of component failure and survival events. The approach is also capable of handling unique component attributes, such as the downtime and repair cost needed for risk estimation, and enables sensitivity analysis for quantifying the criticality of individual components to wind turbine system reliability. Applications of the technique are illustrated by assessing the reliability of a 12-subassembly turbine system. In addition, component downtimes and repair costs are embedded in the formulation to compute the expected annual wind turbine unavailability, repair cost probabilities, and component importance metrics useful for maintenance planning and research prioritization. Furthermore, this paper introduces a recursive solution to the closed-form method and applies it to a 45-component turbine system. The proposed approach proves to be computationally efficient and yields vital reliability information that can be readily used by wind farm stakeholders for decision making and risk management.
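    For independent components, a combinatorial enumeration over failure/survival configurations collapses to a convolution that yields the full distribution of the number of failed subassemblies (the Poisson binomial distribution); a sketch in that spirit (plain Python, illustrative failure probabilities, not the paper's exact recursion):

```python
def failure_count_pmf(pfail):
    """Exact distribution of the number of failed components
    (Poisson binomial), built by convolving one component at a time."""
    pmf = [1.0]
    for p in pfail:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1.0 - p)   # this component survives
            nxt[k + 1] += q * p       # this component fails
        pmf = nxt
    return pmf

# Twelve subassemblies with assumed annual failure probabilities.
pfail = [0.05, 0.02, 0.08, 0.01, 0.04, 0.03, 0.06, 0.02, 0.05, 0.01, 0.03, 0.1]
pmf = failure_count_pmf(pfail)
print(round(pmf[0], 4))  # probability that no subassembly fails in a year
```

    This runs in O(n²) rather than enumerating 2ⁿ configurations, and attaching a downtime or repair cost to each failure count is then a straightforward expectation over the PMF.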

  16. Improving Metrological Reliability of Information-Measuring Systems Using Mathematical Modeling of Their Metrological Characteristics

    Science.gov (United States)

    Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.

    2018-05-01

    Algorithms for improving the metrological reliability of the analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure optimum values of their metrological reliability indices for a given analogue circuit block solution.

  17. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • An implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  18. A study of lip prints and its reliability as a forensic tool

    Science.gov (United States)

    Verma, Yogendra; Einstein, Arouquiaswamy; Gondhalekar, Rajesh; Verma, Anoop K.; George, Jiji; Chandra, Shaleen; Gupta, Shalini; Samadi, Fahad M.

    2015-01-01

    Introduction: Lip prints, like fingerprints, are unique to an individual and can be easily recorded. Therefore, we compared direct and indirect lip print patterns in males and females of different age groups, studied the inter- and intraobserver bias in recording the data, and observed any changes in the lip print patterns over a period of time, thereby assessing the reliability of lip prints as a forensic tool. Materials and Methods: Fifty females and 50 males in the age group of 15 to 35 years were selected for the study. Lips with any deformity or scars were not included. Lip prints were registered by direct and indirect methods and transferred to a preformed registration sheet. The direct method of lip print registration was repeated after a six-month interval. All the recorded data were analyzed statistically. Results: The predominant patterns were vertical and branched. More females showed the branched pattern, while males revealed an equal prevalence of vertical and reticular patterns. Interobserver agreement was 95%, and there was no change in the lip prints over time. Indirect registration of lip prints correlated with direct-method prints. Conclusion: Lip prints can be used as a reliable forensic tool, considering the consistency of lip prints over time and the accurate correlation of indirect prints to direct prints. PMID:26668449

  19. Eliciting design patterns for e-learning systems

    Science.gov (United States)

    Retalis, Symeon; Georgiakakis, Petros; Dimitriadis, Yannis

    2006-06-01

    Design pattern creation, especially in the e-learning domain, is a highly complex process that has not been sufficiently studied and formalized. In this paper, we propose a systematic pattern development cycle, whose most important aspects focus on reverse engineering of existing systems in order to elicit features that are cross-validated through the use of appropriate, authentic scenarios. Moreover, an iterative pattern development process is proposed that takes advantage of multiple data sources, thus emphasizing a holistic view of the teaching-learning processes. The proposed schema of pattern mining has been extensively validated for Asynchronous Network Supported Collaborative Learning (ANSCL) systems, as well as for other types of tools in a variety of scenarios, with promising results.

  20. An Energy-Efficient Link Layer Protocol for Reliable Transmission over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Iqbal Adnan

    2009-01-01

    Full Text Available In multihop wireless networks, hop-by-hop reliability is generally achieved through positive acknowledgments at the MAC layer. However, positive acknowledgments introduce significant energy inefficiencies on battery-constrained devices. This inefficiency becomes particularly significant on high error rate channels. We propose to reduce the energy consumption during retransmissions using a novel protocol that localizes bit-errors at the MAC layer. The proposed protocol, referred to as Selective Retransmission using Virtual Fragmentation (SRVF), requires simple modifications to the positive-ACK-based reliability mechanism but provides substantial improvements in energy efficiency. The main premise of the protocol is to localize bit-errors by performing partial checksums on disjoint parts, or virtual fragments, of a packet. In case of error, only the corrupted virtual fragments are retransmitted. We develop stochastic models of the Simple Positive-ACK-based reliability, the previously proposed Packet Length Optimization (PLO) protocol, and the SRVF protocol operating over an arbitrary-order Markov wireless channel. Our analytical models show that SRVF provides significant theoretical improvements in energy efficiency over existing protocols. We then use bit-error traces collected over different real networks to empirically compare the proposed and existing protocols. These experimental results further substantiate that SRVF provides considerably better energy efficiency than the Simple Positive-ACK and Packet Length Optimization protocols.
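
The virtual-fragmentation premise can be sketched in a few lines: compute a checksum per disjoint fragment of a packet, and on error retransmit only the fragments whose checksums fail. The fragment size, the use of CRC-32, and the packet contents below are illustrative assumptions, not details taken from the paper:

```python
import zlib

FRAG_SIZE = 32  # bytes per virtual fragment (illustrative choice)

def fragment_checksums(packet: bytes, frag_size: int = FRAG_SIZE):
    """Split a packet into virtual fragments and checksum each one."""
    frags = [packet[i:i + frag_size] for i in range(0, len(packet), frag_size)]
    return [zlib.crc32(f) for f in frags]

def corrupted_fragments(received: bytes, sent_checksums, frag_size: int = FRAG_SIZE):
    """Return indices of virtual fragments whose checksum does not match."""
    got = fragment_checksums(received, frag_size)
    return [i for i, (a, b) in enumerate(zip(got, sent_checksums)) if a != b]

# Sender computes per-fragment checksums; receiver localizes the bit-errors.
packet = bytes(range(96))                         # 3 virtual fragments
sums = fragment_checksums(packet)
damaged = bytearray(packet)
damaged[40] ^= 0xFF                               # flip bits in fragment 1 only
print(corrupted_fragments(bytes(damaged), sums))  # → [1]
```

Only fragment 1 would be retransmitted here, rather than the whole 96-byte packet as under a simple positive-ACK scheme.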

  1. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-07-01

    Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve the placement efficiency and reduce the wire burden of the PZT array, the concept of the piezoelectric transducers layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and connection reliability of the PSL. To address these issues, an electromagnetic shielding design method of the PSL to reduce spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding-cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are 15 dB and 25 dB lower, respectively, than those of the ordinary PSL. Two other experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL is of high connection reliability and long fatigue life.

  2. Methodology for reliability allocation based on fault tree analysis and dualistic contrast

    Institute of Scientific and Technical Information of China (English)

    TONG Lili; CAO Xuewu

    2008-01-01

    Reliability allocation is a difficult multi-objective optimization problem. This paper presents a methodology for reliability allocation that can be applied to determine the reliability characteristics of reactor systems or subsystems. The dualistic contrast, known as one of the most powerful tools for optimization problems, is applied to the reliability allocation model of a typical system in this article. The fault tree analysis, deemed to be one of the effective methods of reliability analysis, is also adopted. Thus a failure rate allocation model based on fault tree analysis and dualistic contrast is achieved. An application on the emergency diesel generator in a nuclear power plant is given to illustrate the proposed method.

  3. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital system instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. 
System structural models must also be developed in order to predict system reliability based upon the reliability of the constituent elements.

  4. Simulation Approach to Mission Risk and Reliability Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  5. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

    The paper focuses on the information and control system reliability issue. The authors propose a new complex approach to information and control system reliability enhancement by application of fog-computing concept elements. The approach consists of a complex of optimization problems to be solved: estimation of the computational complexity which can be shifted to the edge of the network and the fog layer, distribution of computations among the data processing elements, and distribution of computations among the sensors. These problems are formulated, and simulation results are presented and discussed, within this paper.

  6. Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach

    Directory of Open Access Journals (Sweden)

    Yuan Feng

    2017-04-01

    Full Text Available This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To effectively handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, which is a linear combination of the possibility measure and the necessity measure that balances pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we particularly adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
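
The mλ measure is stated to be a linear combination of the possibility and necessity measures. The sketch below assumes a triangular fuzzy return and the standard possibility/necessity formulas, and computes the expected value by integrating mλ{ξ ≥ r}; the paper's own operator may differ in detail, and the return parameters are made up:

```python
def pos_geq(r, a, b, c):
    """Possibility that a triangular fuzzy number (a, b, c) is >= r."""
    if r <= b:
        return 1.0
    if r >= c:
        return 0.0
    return (c - r) / (c - b)

def nec_geq(r, a, b, c):
    """Necessity that (a, b, c) is >= r, i.e. 1 - possibility of being < r."""
    if r <= a:
        return 1.0
    if r >= b:
        return 0.0
    return (b - r) / (b - a)

def m_lambda_geq(r, a, b, c, lam):
    """m-lambda measure: lam * possibility + (1 - lam) * necessity."""
    return lam * pos_geq(r, a, b, c) + (1 - lam) * nec_geq(r, a, b, c)

def expected_value(a, b, c, lam, steps=20000):
    """E[xi] = integral over r of m_lambda{xi >= r} (non-negative support)."""
    h = c / steps
    return sum(m_lambda_geq((i + 0.5) * h, a, b, c, lam) for i in range(steps)) * h

# Triangular return (0.02, 0.05, 0.10); lam trades optimism against pessimism.
e = expected_value(0.02, 0.05, 0.10, lam=0.5)
# Closed form: lam*(b+c)/2 + (1-lam)*(a+b)/2 = 0.0375 + 0.0175 = 0.055
```

At lam = 0.5 this reduces to the familiar credibility-measure expected value (a + 2b + c)/4.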

  7. On-line determination of operating limits incorporating constraint costs and reliability assessment

    International Nuclear Information System (INIS)

    Meisingset, M.; Lovas, G. G.

    1997-01-01

    Problems regarding power system operation following deregulation were discussed. These problems arise from the changed power flow patterns created by deregulation and competitive power markets, which push flows beyond the N-1 limit (the transmission capacity that remains secure after the loss of any single line) and thereby create bottlenecks. In such a situation, constraint costs and security costs (i.e., the cost of supply interruptions) are incurred as a direct result of the deterministic criteria used in reliability assessment. This paper describes an on-line probabilistic method to determine operating limits based on a trade-off between constraint costs and security costs. The probability of contingencies depends on the prevailing weather conditions, which therefore have a significant impact on the calculated operating limit. In consequence, the proposed method allows power flow to exceed the N-1 limit during normal weather; under adverse weather conditions the N-1 criterion should be maintained. 15 refs., 13 figs

  8. Reliability Assessment of Wind Farm Electrical System Based on a Probability Transfer Technique

    Directory of Open Access Journals (Sweden)

    Hejun Yang

    2018-03-01

    Full Text Available The electrical system of a wind farm has a significant influence on the wind farm reliability and electrical energy yield. The disconnect switch installed in an electrical system can not only improve the operating flexibility, but also enhance the reliability of a wind farm. Therefore, this paper develops a probabilistic transfer technique for integrating the electrical topology structure, the isolation operation of the disconnect switch, and the stochastic failure of electrical equipment into the reliability assessment of the wind farm electrical system. Firstly, since the traditional two-state reliability model of electrical equipment cannot consider the isolation operation, a three-state reliability model is developed to replace the two-state model and incorporate the isolation operation. In addition, a proportion apportionment technique is presented to evaluate the state probabilities. Secondly, this paper develops a probabilistic transfer technique based on the idea of transferring the unreliability of the electrical system into the energy transmission interruption of the wind turbine generators (WTGs). Finally, some novel indices for describing the reliability of the wind farm electrical system are designed, and the variance coefficient of the designed indices is used as a convergence criterion to determine the termination of the assessment process. The proposed technique is applied to the reliability assessment of a wind farm with different topologies. The simulation results show that the proposed techniques are effective in practical applications.

  9. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is shown to be more efficient because, according to the comparison results, it requires only a small number of sample points.
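
As a rough illustration of importance sampling for failure probability estimation (not the paper's Kriging-assisted algorithm), the sketch below shifts the sampling density toward an assumed most probable point for a simple linear limit state in standard normal variables; the limit state, MPP, and sample count are all illustrative assumptions:

```python
import math
import random

random.seed(1)

def g(x1, x2):
    """Assumed limit-state function: failure when g <= 0."""
    return 5.0 - x1 - x2

def norm_pdf(x, mu=0.0):
    """Standard-deviation-1 normal density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def importance_sampling_pf(n=50000, mpp=(2.5, 2.5)):
    """Estimate Pf by sampling around the MPP and reweighting each sample
    by the ratio of the true density to the shifted sampling density."""
    total = 0.0
    for _ in range(n):
        x1 = random.gauss(mpp[0], 1.0)
        x2 = random.gauss(mpp[1], 1.0)
        if g(x1, x2) <= 0:
            w = (norm_pdf(x1) * norm_pdf(x2)) / (
                norm_pdf(x1, mpp[0]) * norm_pdf(x2, mpp[1]))
            total += w
    return total / n

pf = importance_sampling_pf()
print(pf)  # exact value is 1 - Phi(5/sqrt(2)) ≈ 2.0e-4
```

Crude Monte Carlo would need millions of samples to see this rare event; centering the sampler on the MPP makes roughly half the samples fall in the failure domain.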

  10. Structural reliability analysis based on the cokriging technique

    International Nuclear Information System (INIS)

    Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng

    2010-01-01

    Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points should be generated to fill the design space, which can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper, a new method, the cokriging method, an extension of kriging, is proposed to calculate structural reliability. The cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models with improved accuracy and efficiency for structural reliability problems and is a viable alternative to kriging.

  11. Influence Of Inspection Intervals On Mechanical System Reliability

    International Nuclear Information System (INIS)

    Zilberman, B.

    1998-01-01

    In this paper a methodology for the reliability analysis of mechanical systems with latent failures is described. Reliability analysis of such systems must include appropriate usage of check intervals for latent failure detection. The methodology suggests that, based on system logic, the analyst decide at the outset whether a system can fail actively or latently, and propagate this approach through all system levels. All inspections are assumed to be perfect (all failures are detected and repaired, and no new failures are introduced as a result of the maintenance). Additional assumptions are that the mission time is much smaller than the check intervals and that all components have constant failure rates. Analytical expressions for reliability calculations are provided, based on fault tree and Markov modeling techniques (for two- and three-redundant systems with inspection intervals). The proposed methodology yields more accurate results than are obtained by ignoring check intervals or using half the check interval time. The conventional analysis, which assumes that the system is as good as new at the beginning of each mission, gives an optimistic prediction of system reliability. Some examples of reliability calculations for mechanical systems with latent failures and of establishing optimum check intervals are provided
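
Under the stated assumptions (constant failure rate, perfect inspections, repair time neglected), the average unavailability of a latently failing component over an inspection interval T has a standard closed form, sketched here with illustrative numbers:

```python
import math

def mean_unavailability(lam, T):
    """Average unavailability of a latently failing component inspected
    every T: q = 1 - (1 - exp(-lam*T)) / (lam*T), approx lam*T/2 for
    small lam*T. Assumes perfect inspections and a constant failure rate."""
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

lam = 1e-4  # failures per hour (illustrative)
for T in (730.0, 4380.0, 8760.0):  # monthly, semi-annual, annual checks
    print(T, mean_unavailability(lam, T))  # grows roughly as lam*T/2
```

This is why ignoring check intervals (q = 0) is optimistic, while the commonly used half-interval exposure time (lam*T/2) is only a first-order approximation that overstates q slightly for larger lam*T.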

  12. PSA applications and piping reliability analysis: where do we stand?

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    1997-01-01

    This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretations of operational data on pipe failures, and to support application-specific parameter estimation

  13. Material and design considerations of FBGA reliability performance

    International Nuclear Information System (INIS)

    Lee, Teck Kheng; Ng, T.C.; Chai, Y.M.

    2004-01-01

    FBGA package reliability is usually assessed through the conventional approaches of die attach and mold compound material optimization. However, with the rapid changes and fast-moving pace of electronic packaging and the introduction of new soldermask and core materials, substrate design has also become a critical factor in determining overall package reliability. The purpose of this paper is to understand the impact of the design and soldermask material of a rigid substrate on overall package reliability. Three different soldermask patterns with a matrix of different die attach, mold compound, and soldermask materials are assessed using the moisture sensitivity test (MST). Package reliability is also assessed through the use of temperature cycling (T/C) at conditions 'B' and 'C'. For material optimization, three different mold compounds and die attach materials are used. Material adhesion between the different die attach materials and soldermask materials is obtained through die shear performed at various temperatures and preset moisture conditions. A study correlating the different packaging material properties and their relative adhesion strengths with overall package reliability, in terms of both MST and T/C performance, was performed. Soldermask design under the die pads was found to affect package reliability. For example, locating vias at the edge of the die is not desirable because the vias act as initiation points for delamination and moisture-induced failure. Through die shear testing, soldermask B demonstrated higher adhesion properties compared to soldermask A across several packaging materials and enhanced the overall package reliability in terms of both MST and T/C performance. Both MST JEDEC level 1 and T/C 'B' and 'C' at 1000 cycles have been achieved through design and package material optimization

  14. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    Full Text Available At present cloud computing is one of the newest trends in distributed computation, which is propelling another important revolution of the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of a cloud services composition application, which is a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamism. The traditional reliability and performance optimization techniques, for example, Markov models and state space analysis, have defects: they are time consuming, easily cause state space explosion, and rest on assumptions of component execution independence that are often not satisfied. To overcome these defects, we propose a fast optimization method for the reliability and performance of cloud services composition applications based on the universal generating function and a genetic algorithm. At first, a reliability and performance model for cloud service composition applications based on multi-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. In the end, illustrative examples are given.
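
A universal generating function represents each multi-state element as a probability mass over its performance levels and composes elements with structure-dependent operators. A minimal sketch with made-up service performance distributions (the paper's model is richer than this):

```python
from collections import defaultdict

def compose(u1, u2, op):
    """Combine two UGFs {performance: probability} with a structure operator."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

def reliability(u, w):
    """Probability that the composed performance meets demand level w."""
    return sum(p for g, p in u.items() if g >= w)

# Two multi-state services: {performance level: probability}.
svc_a = {0: 0.1, 50: 0.3, 100: 0.6}
svc_b = {0: 0.05, 100: 0.95}

series = compose(svc_a, svc_b, min)                   # limited by slowest stage
parallel = compose(svc_a, svc_b, lambda a, b: a + b)  # capacities add up

print(reliability(series, 50))  # ≈ 0.855
```

The avoidance of a full Markov state space is visible here: composition is a polynomial product over performance levels rather than an enumeration of joint system states.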

  15. Multi-objective reliability redundancy allocation in an interval environment using particle swarm optimization

    International Nuclear Information System (INIS)

    Zhang, Enze; Chen, Qingwei

    2016-01-01

    Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective optimization problem is formulated from the original crisp one, where system reliability and cost are simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach has been demonstrated through two numerical examples and a case study of a supervisory control and data acquisition (SCADA) system in water resource management. - Highlights: • We model the reliability redundancy allocation problem in an interval environment. • We apply the particle swarm optimization directly on the interval values. • A dominance relation for interval-valued multi-objective functions is defined. • The crowding distance metric is extended to handle imprecise objective functions.
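
The paper's newly proposed order relations for interval numbers are not reproduced in the abstract, so the sketch below uses one common choice (compare midpoints, tie-break on width) purely to show how an interval dominance check for minimization objectives can be assembled:

```python
def midpoint(iv):
    return (iv[0] + iv[1]) / 2.0

def width(iv):
    return iv[1] - iv[0]

def interval_leq(a, b):
    """One common order for intervals [lo, hi]: prefer a smaller midpoint,
    then a smaller width. The paper defines its own order relations."""
    return (midpoint(a), width(a)) <= (midpoint(b), width(b))

def dominates(f_a, f_b):
    """For minimization: a dominates b if a is <= in every interval-valued
    objective and strictly better in at least one."""
    leq_all = all(interval_leq(x, y) for x, y in zip(f_a, f_b))
    strict = any(not interval_leq(y, x) for x, y in zip(f_a, f_b))
    return leq_all and strict

# Two designs with interval-valued (cost, unreliability) objectives.
a = [(3.0, 4.0), (0.10, 0.20)]
b = [(3.5, 4.5), (0.15, 0.25)]
print(dominates(a, b))  # → True: a is better in both objectives
```

With such a relation in place, the non-dominated sorting and crowding-distance machinery of MOPSO can operate on interval objectives exactly as it does on crisp ones.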

  16. Increasing reliability of nuclear energy equipment and at nuclear power plants

    International Nuclear Information System (INIS)

    Ochrana, L.

    1997-01-01

    The Institute of Nuclear Energy at the Technical University in Brno cooperates with nuclear power plants in increasing their reliability. The teaching programme is briefly described. The scientific research programme of the Department of Heat and Nuclear Power Energy Equipment in the field of reliability is based on a complex systematic concept securing a high level of reliability. In 1996 the Department prepared a study dealing with the evaluation of the maintenance system in a nuclear power plant. The proposed techniques make it possible to evaluate the reliability and maintenance characteristics of any individual component in a nuclear power plant, and to monitor, record and evaluate data at any given time intervals. (M.D.)

  17. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Antônio Dâmaso

    2017-11-01

    Full Text Available Power consumption is a primary interest in Wireless Sensor Networks (WSNs, and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  18. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  19. Non-periodic preventive maintenance with reliability thresholds for complex repairable systems

    International Nuclear Information System (INIS)

    Lin, Zu-Liang; Huang, Yeu-Shiang; Fang, Chih-Chiang

    2015-01-01

    In general, a non-periodic condition-based PM policy with different condition variables is often more effective than a periodic age-based policy for deteriorating complex repairable systems. In this study, system reliability is estimated and used as the condition variable, and three reliability-based PM models are then developed with consideration of different scenarios, which can assist in evaluating the maintenance cost for each scenario. The proposed approach provides the optimal reliability thresholds and PM schedules in advance, by which the system availability and quality can be ensured and the organizational resources can be well prepared and managed. The results of the sensitivity analysis indicate that PM activities performed at a high reliability threshold can not only significantly improve the system availability but also efficiently extend the system lifetime, although such a PM strategy is more costly than that for a low reliability threshold. The optimal reliability threshold increases along with the number of PM activities to prevent future breakdowns caused by severe deterioration, and thus substantially reduces repair costs. - Highlights: • The PM problems for repairable deteriorating systems are formulated. • The structural properties of the proposed PM models are investigated. • The corresponding algorithms to find the optimal PM strategies are provided. • Imperfect PM activities are allowed to reduce the occurrences of breakdowns. • Provide managers with insights about the critical factors in the planning stage
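
A reliability-threshold PM trigger can be illustrated with a Weibull deterioration model (an illustrative assumption; the paper's models are more general and account for imperfect PM): maintenance is scheduled at the time the estimated reliability first drops to the threshold.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) for a Weibull-deteriorating system (shape beta, scale eta)."""
    return math.exp(-((t / eta) ** beta))

def pm_time(threshold, beta, eta):
    """Time at which reliability first reaches the PM threshold:
    solve R(t) = threshold for t."""
    return eta * (-math.log(threshold)) ** (1.0 / beta)

beta, eta = 2.5, 10000.0            # illustrative shape / scale (hours)
t_pm = pm_time(0.90, beta, eta)     # schedule PM when R(t) falls to 0.90
print(t_pm, weibull_reliability(t_pm, beta, eta))  # R(t_pm) ≈ 0.90
```

Raising the threshold shortens the interval between PM actions (more PM cost) but keeps the system further from failure, which is the cost/availability trade-off the sensitivity analysis explores.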

  20. Test-retest reliability of trunk motor variability measured by large-array surface electromyography.

    Science.gov (United States)

    Abboud, Jacques; Nougarou, François; Loranger, Michel; Descarreaux, Martin

    2015-01-01

    The objective of this study was to evaluate the test-retest reliability of the trunk muscle activity distribution in asymptomatic participants during muscle fatigue using large-array surface electromyography (EMG). Trunk muscle activity distribution was evaluated twice, with 3 to 4 days between sessions, in 27 asymptomatic volunteers using large-array surface EMG. Motor variability, assessed with 2 different variables (the centroid coordinates of the root mean square map and the dispersion variable), was evaluated during a low back muscle fatigue task. Test-retest reliability of muscle activity distribution was obtained using Pearson correlation coefficients. A shift in the distribution of EMG amplitude toward the lateral-caudal region of the lumbar erector spinae induced by muscle fatigue was observed. Moderate to very strong correlations were found between both sessions in the last 3 phases of the fatigue task for both motor variability variables, whereas weak to moderate correlations were found in the first phases of the fatigue task only for the dispersion variable. These findings show that, in asymptomatic participants, patterns of EMG activity are less reliable in initial stages of muscle fatigue, whereas later stages are characterized by highly reliable patterns of EMG activity. Copyright © 2015 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.

  1. Improved Reliability-Based Optimization with Support Vector Machines and Its Application in Aircraft Wing Design

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2015-01-01

    Full Text Available A new reliability-based design optimization (RBDO) method based on support vector machines (SVM) and the Most Probable Point (MPP) is proposed in this work. SVM is used to create a surrogate model of the limit-state function at the MPP with the gradient information in the reliability analysis. This guarantees that the surrogate model not only passes through the MPP but also is tangent to the limit-state function at the MPP. Then, importance sampling (IS) is used to calculate the probability of failure based on the surrogate model. This treatment significantly improves the accuracy of reliability analysis. For RBDO, the Sequential Optimization and Reliability Assessment (SORA) is employed as well, which decouples deterministic optimization from the reliability analysis. The improved SVM-based reliability analysis is used to amend the error from linear approximation of the limit-state function in SORA. A mathematical example and a simplified aircraft wing design demonstrate that the improved SVM-based reliability analysis is more accurate than FORM and needs fewer training points than the Monte Carlo simulation, and that the proposed optimization strategy is efficient.

  2. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundancy components in series and/or parallel systems and that alternative designs are available. Reliability optimization problems concentrate on the optimal allocation of redundancy components and the optimal selection of alternative designs to meet system requirements. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic methods, the Lagrangean multiplier method, and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approaches for various reliability optimization problems, such as reliability optimization of redundant systems, reliability optimization with alternative designs, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce hybrid approaches for combining GA with fuzzy logic, neural networks and other conventional search techniques. Finally, we present experiments on various reliability optimization problems using the hybrid GA approach

  3. The role of high cycle fatigue (HCF) onset in Francis runner reliability

    International Nuclear Information System (INIS)

    Gagnon, M; Tahan, S A; Bocher, P; Thibault, D

    2012-01-01

High Cycle Fatigue (HCF) plays an important role in Francis runner reliability. This paper presents a model in which reliability is defined as the probability of not exceeding a threshold above which HCF contributes to crack propagation. In the context of combined Low Cycle Fatigue (LCF) and HCF loading, the Kitagawa diagram is used as the limit-state threshold for reliability. The reliability problem is solved using First-Order Reliability Methods (FORM). A case study is presented using in situ measured strains and operational data. All the parameters of the reliability problem are based either on observed data or on typical design specifications. From the results obtained, we observe that the uncertainty around the defect size and the HCF stress range plays an important role in reliability. At the same time, we observe that the expected values of the LCF stress range and the number of LCF cycles have a significant influence on life assessment, but the uncertainty around these values can be neglected in the reliability assessment.
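For a linear limit state with independent normal variables, FORM reduces to a closed form, which gives a feel for the reliability index used in the paper (this toy stress-strength example is not the paper's Kitagawa-based threshold; all means and standard deviations are hypothetical):

```python
import math

def form_beta_linear(mu_r, sigma_r, mu_s, sigma_s):
    """FORM for the linear limit state g = R - S with independent
    normal strength R and stress S; beta is exact in this case."""
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    """Pf = Phi(-beta) for the standard normal CDF Phi."""
    return 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))

beta = form_beta_linear(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=15.0)
pf = failure_probability(beta)
```

For nonlinear limit states such as the Kitagawa threshold, FORM instead iterates to the most probable point and linearizes there, but the beta-to-Pf mapping is the same.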

  4. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in risk and reliability studies of nuclear power plants. Despite the widespread use of the most popular of these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects.

  5. Modifying nodal pricing method considering market participants optimality and reliability

    Directory of Open Access Journals (Sweden)

    A. R. Soofiabadi

    2015-06-01

This paper develops a method for nodal pricing and a market clearing mechanism that considers the reliability of the system. The effects of component reliability on electricity price, market participants' profit and system social welfare are considered. Reliability is taken into account both in evaluating each market participant's optimality and in achieving fair pricing and market clearing. To achieve fair pricing, the nodal price is obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, a comprehensive criterion is introduced for the optimality evaluation of market participants. The social welfare and efficiency of the system are increased under the proposed modified nodal pricing method.

  6. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Vincent A. R. Camara

    1999-01-01

The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function to changes in the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the popular squared-error loss function, we derive expressions for empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, is used as a criterion to numerically compare our results.
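The dependence of a Bayes estimate on the loss function can be seen in a small conjugate example (this is an illustration, not the paper's Higgins–Tsokos or Harris estimators; the Gamma posterior parameters are hypothetical). For an exponential lifetime with rate lam ~ Gamma(a, b), the squared-error-loss estimate of R(t) is the posterior mean, while a plug-in estimate uses only E[lam]:

```python
import math

def reliability_sq_loss(a, b, t):
    """Bayes estimate of R(t) = exp(-lam*t) under squared-error loss:
    the posterior mean E[exp(-lam*t)] for lam ~ Gamma(shape a, rate b)."""
    return (b / (b + t)) ** a

def reliability_plug_in(a, b, t):
    """A simpler plug-in estimate exp(-t * E[lam]); by Jensen's
    inequality it never exceeds the squared-error-loss estimate."""
    return math.exp(-t * a / b)

def numeric_posterior_mean(a, b, t, upper=20.0, steps=100000):
    """Midpoint-rule check of the closed form E[exp(-lam*t)]."""
    h = upper / steps
    norm = b ** a / math.gamma(a)
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * h
        total += math.exp(-lam * t) * norm * lam ** (a - 1) * math.exp(-b * lam) * h
    return total
```

The gap between the two estimates is exactly the kind of loss-function sensitivity the paper quantifies with integrated mean square error.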

  7. NDT Reliability - Final Report. Reliability in non-destructive testing (NDT) of the canister components

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovic, Mato; Takahashi, Kazunori; Mueller, Christina; Boehm, Rainer (BAM, Federal Inst. for Materials Research and Testing, Berlin (Germany)); Ronneteg, Ulf (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden))

    2008-12-15

This report describes the methodology of the reliability investigation performed on the ultrasonic phased-array NDT system, developed by SKB in collaboration with Posiva, for inspection of the canisters for permanent storage of spent nuclear fuel. The canister is composed of a cast iron insert surrounded by a copper shell. The shell consists of the tube and the lid/base, which are welded to the tube after the fuel has been placed in the tube. The manufacturing process of the canister parts and the welding process are described. Possible defects, which might arise in the canister components during manufacturing or in the weld during welding, are identified. The number of real defects in manufactured components has been limited; therefore, the reliability of the NDT system has been determined using a number of test objects with artificial defects. The reliability analysis is based on signal response analysis. The conventional signal response analysis is adopted and further developed before being applied to the modern ultrasonic phased-array NDT system. The concept of a multi-parameter a, where the response of the NDT system depends on more than just one parameter, is introduced. The weakness of using the peak signal response in the analysis is demonstrated, and integration of the amplitudes in the C-scan is proposed as an alternative. The calculation of the volume POD, when the part is inspected with several configurations, is also presented. The reliability analysis is supported by ultrasonic simulation based on the point source synthesis method.

  8. Some reliability issues for incomplete two-dimensional warranty claims data

    International Nuclear Information System (INIS)

    Kumar Gupta, Sanjib; De, Soumen; Chatterjee, Aditya

    2017-01-01

Bivariate reliability and vector bivariate hazard rate or hazard gradient functions are expected to play a role in the meaningful assessment of field performance for items under two-dimensional warranty coverage. In this paper a usage-rate-based simple class of bivariate reliability functions is proposed and various bivariate reliability characteristics are studied for warranty claims data. The utility of such a study is explored with the help of real-life synthetic data. - Highlights: • Independence between age and usage rate is established. • Conditional reliability and hazard gradient along age and usage are determined. • The change point of the hazard gradients is estimated. • The concepts of layered renewal process and NHPP are introduced. • Expected number of renewals and failures at different age-usage cut-offs are obtained.

  9. How to measure distinct components of visual attention fast and reliably

    DEFF Research Database (Denmark)

    Vangkilde, Signe Allerup; Kyllingsbæk, Søren; Habekost, Thomas

    2009-01-01

Measuring different attentional processes in a fast and reliable way is important in both clinical and experimental settings. However, most tests of visual attention are either lengthy or lack sensitivity, specificity, and reliability. To address this, we developed a ten-minute test procedure...... for the Swedish Betula-project, a longitudinal study investigating changes in cognitive functions over the adult life span (Nilsson et al., 2004). The test consists of a computer-based letter recognition task with stimulus displays of varied durations followed by pattern masks or a blank screen. The temporal...

  10. Maps and plans reliability in tourism activities

    Directory of Open Access Journals (Sweden)

    Олександр Донцов

    2016-10-01

The paper is devoted to the creation of an effective system of mapping at all levels of tourist-excursion activity that will boost the promotion of tourist products in domestic and foreign tourist markets. The State Scientific-Production Enterprise «Kartographia» actively participates in the cartographic provision of tourism by producing travel, survey, large-scale and route maps, atlases, travel guides and city plans. It produces maps of varied content covering the territory of Ukraine, its individual regions, and cities of interest for tourist excursions. The list and scope of cartographic products prepared for publication and released over the last five years is given. The development of new types of tourism encourages publishers to create various cartographic products for the needs of tourists, guaranteeing high accuracy, reliability of information and ease of use. The variety of scientific and practical problems in tourism and excursion activities that are solved using maps and plans makes it difficult to determine the criteria for assessing their reliability. The author proposes to introduce the concept of «relevance», understood as a map's suitability for solving specific problems. Expert review rests on the suitability of maps for producing objective results, judged by the following criteria: appropriateness of the map to the target task (area, theme, destination); accuracy of the given parameters (projection, scale, contour interval); year of the survey or mapping; selection of methods and the algorithm for processing measurement results; availability of assistive devices (instrumentation, computer technology, simulation devices). These criteria make the reliability and accuracy of the result as acceptable to consumers as possible. The author proposes a set of measures aimed at improving the content, quality and reliability of cartographic production.

  11. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operation load decreases with time; it depends on the failure probability, 1-R. This representation captures the memory characteristics of the second failure cause. Man-machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one as a consequence of inappropriate human habits in system operation; it is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is thus found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and allows simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used in the validation of the proposed model, with satisfactory results from the comparisons.
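A numerical sketch of such an additive hazard in terms of reliability, loosely following the four mechanisms above (the functional forms and all coefficients are illustrative choices, not the paper's fitted model). Integrating dR/dt = -h(R)R and recording h along the trajectory reproduces a bathtub shape in time:

```python
def hazard(R, c_random=0.02, c_damage=0.5, c_mmi=0.05, c_adapt=0.3, p=8):
    """Additive hazard in terms of reliability R:
    constant random failures + cumulative damage ~ (1-R)
    + man-machine interference ~ R*(1-R) + adaptation ~ R**p."""
    return (c_random + c_damage * (1.0 - R)
            + c_mmi * R * (1.0 - R) + c_adapt * R ** p)

def simulate(T=10.0, dt=0.001):
    """Euler-integrate dR/dt = -h(R)*R and record (t, h) pairs."""
    R, t, path = 1.0, 0.0, []
    while t < T:
        h = hazard(R)
        path.append((t, h))
        R -= h * R * dt
        t += dt
    return path

path = simulate()
```

Early on the adaptation term R**p dominates and the hazard falls; as R decays, the damage term (1-R) takes over and the hazard rises again, giving the bathtub without any explicit time dependence.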

  12. Reliability-based design code calibration for concrete containment structures

    International Nuclear Information System (INIS)

    Han, B.K.; Cho, H.N.; Chang, S.P.

    1991-01-01

In this study, load combination criteria for design and a probability-based reliability analysis were proposed on the basis of an FEM-based random vibration analysis. The limit state defined for the study is a serviceability limit state of crack failure that causes the emission of radioactive materials, and the results are compared with the case of the strength limit state. More accurate reliability analyses under various dynamic loads, such as earthquake loads, were made possible by incorporating FEM and random vibration theory, which differs from the conventional reliability analysis method. The uncertainties in loads and resistance available in Korea and in the references were adapted to the situation of Korea; in the case of earthquakes in particular, the design earthquake was assessed from the available data for the probabilistic description of earthquake ground acceleration on the Korean peninsula. SAP V-2 is used for the three-dimensional finite element analysis of the concrete containment structure, and the reliability analysis is carried out by modifying the HRAS reliability analysis program for this study. (orig./GL)

  13. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from field conditions; thus, to predict field failure, one needs to calibrate the prediction made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to account for the difference between lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedure are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated on two examples, together with a sensitivity analysis with respect to the prior distribution assumption.
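A stripped-down version of the calibration idea (not the paper's model: exponential lifetimes, a single calibration factor k between lab and field failure rates, and random-walk Metropolis standing in for the paper's MCMC; `lab_rate` and `true_k` are made-up values for the synthetic data):

```python
import math
import random

rng = random.Random(7)

# synthetic data: the lab ADT gives a failure-rate estimate, while the
# field units are actually less reliable by an unknown factor k (here 2.0)
lab_rate = 0.5
true_k = 2.0
field_times = [rng.expovariate(true_k * lab_rate) for _ in range(300)]
n_obs, total_time = len(field_times), sum(field_times)

def log_post(k):
    """Log-posterior for k: exponential likelihood + lognormal(0, 2) prior."""
    if k <= 0.0:
        return -math.inf
    rate = k * lab_rate
    return n_obs * math.log(rate) - rate * total_time - 0.5 * math.log(k) ** 2 / 4.0

# random-walk Metropolis sampling of the calibration factor
k, lp, chain = 1.0, log_post(1.0), []
for step in range(20000):
    prop = k + rng.gauss(0.0, 0.2)
    lp_prop = log_post(prop)
    if math.log(rng.random()) < lp_prop - lp:   # accept/reject
        k, lp = prop, lp_prop
    if step >= 5000:                            # discard burn-in
        chain.append(k)

k_hat = sum(chain) / len(chain)
```

The posterior mean of k recovers the lab-to-field discrepancy; in the paper the same machinery runs on degradation-model parameters rather than a single rate.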

  14. Reliability and fault tolerance in the European ADS project

    International Nuclear Information System (INIS)

    Biarrotte, Jean-Luc

    2013-01-01

    After an introduction to the theory of reliability, this paper focuses on a description of the linear proton accelerator proposed for the European ADS demonstration project. Design issues are discussed and examples of cases of fault tolerance are given. (author)

  15. Impacts of Contingency Reserve on Nodal Price and Nodal Reliability Risk in Deregulated Power Systems

    DEFF Research Database (Denmark)

    Zhao, Qian; Wang, Peng; Goel, Lalit

    2013-01-01

    The deregulation of power systems allows customers to participate in power market operation. In deregulated power systems, nodal price and nodal reliability are adopted to represent locational operation cost and reliability performance. Since contingency reserve (CR) plays an important role...... in reliable operation, the CR commitment should be considered in operational reliability analysis. In this paper, a CR model based on customer reliability requirements has been formulated and integrated into power market settlement. A two-step market clearing process has been proposed to determine generation...

  16. Design methodologies for reliability of SSL LED boards

    NARCIS (Netherlands)

    Jakovenko, J.; Formánek, J.; Perpiñà, X.; Jorda, X.; Vellvehi, M.; Werkhoven, R.J.; Husák, M.; Kunen, J.M.G.; Bancken, P.; Bolt, P.J.; Gasse, A.

    2013-01-01

This work presents a comparison of various LED board technologies from thermal, mechanical and reliability points of view, supported by accurate 3-D modelling. LED boards are proposed as a possible technology replacement for the FR4 LED boards used in 400-lumen retrofit SSL lamps. Presented design

  17. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times during testing. Traditional time-based SRGMs may not be accurate enough in situations where test effort varies with time. To overcome this lacuna, test effort has been used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic since, at infinite testing time, test effort will be infinite. Hence in this paper we propose an infinite test effort function (TEF) in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) to train the proposed model with software failure data. It is possible to obtain many sets of weights for the same model that describe the past failure data equally well; we use a machine learning approach to select the set of weights that describes both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
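The NHPP backbone of such models can be sketched with the classical Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) (a stand-in for the paper's log-power TEF model; the synthetic data, parameter values, and grid search in place of ANN training are all illustrative):

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a*(1 - exp(-b*t)):
    a = total expected faults, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

# synthetic cumulative failure counts generated from a=100, b=0.1
times = list(range(1, 31))
observed = [mean_failures(t, 100.0, 0.1) for t in times]

def sse(a, b):
    """Sum of squared errors between the model and the observed counts."""
    return sum((mean_failures(t, a, b) - m) ** 2 for t, m in zip(times, observed))

# crude grid search standing in for maximum likelihood / ANN training
best = min(((a, b) for a in range(50, 151, 5)
            for b in (i / 100.0 for i in range(1, 31))),
           key=lambda ab: sse(*ab))
```

Once a and b are fitted, m(t) extrapolated forward gives the expected residual faults, which is the quantity release-time decisions are based on.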

  18. Relative Reliability and the Recognisable Firm

    DEFF Research Database (Denmark)

    Huikku, Jari; Mouritsen, Jan; Silvola, Hanna

    2017-01-01

    This paper complements financial accounting research by a qualitative study of financial accounting practices. Its object is goodwill impairment tests (IAS 36) under the influence of International Financial Reporting Standards, which it uses to illustrate how financial accounting is produced....... The aim is to investigate how accounting standards are translated into accounting practices, and to investigate how this is reliable. Drawing on actor network theory, the paper proposes calculative practices to be a networked and distributed affair. The study has two main contributions. Firstly, it shows...... that in the case of goodwill impairment tests, financial accounting is a process of finding, qualifying, stabilizing and calculating traces that often have to be found beyond the company infrastructure of sheets of accounts and the financial ledger. Secondly, it shows that these traces increase reliability when...

  19. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction process. Later, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process to be a non-homogeneous Poisson process (NHPP), and the repair (correction) process a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Applications of the model to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.

  20. Reliability evaluation of a port oil transportation system in variable operation conditions

    International Nuclear Information System (INIS)

    Soszynska, Joanna

    2006-01-01

The semi-Markov model of the system operation process is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, the reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system.
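The 'm out of n' building block can be computed directly from the binomial distribution (a two-state i.i.d. simplification of the paper's multi-state setting; the subsystem parameters below are made up):

```python
from math import comb

def m_out_of_n(m, n, p):
    """Probability that at least m of n i.i.d. components
    (each working with probability p) are working."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(m, n + 1))

def series_of_m_out_of_n(subsystems):
    """Series of independent 'm out of n' subsystems, given as
    (m, n, p) triples: the system works only if every subsystem works."""
    rel = 1.0
    for m, n, p in subsystems:
        rel *= m_out_of_n(m, n, p)
    return rel

# e.g. a 2-out-of-3 subsystem in series with a 1-out-of-2 subsystem
rel = series_of_m_out_of_n([(2, 3, 0.9), (1, 2, 0.8)])
```

In the paper this reliability structure is additionally conditioned on the semi-Markov operation state, so p effectively varies with the operating condition.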

  1. The evaluation of equipment and Instrumentation Reliability Factors on Power Reactor

    International Nuclear Information System (INIS)

    Supriatna, Piping; Karlina, Itjeu; Widagdo, Suharyo; Santosa, Kussigit; Darlis; Sudiyono, Bambang; Yuniyanta, Sasongko; Sudarmin

    1999-01-01

The reliability of equipment and instrumentation in a power reactor control room is determined by its layout and design. The principles of ergonomics applied to the equipment and instrumentation layout in this ABWR type reactor are: a geometric pattern appropriate to economic body motion; average anthropometric data of the operators, especially operator hand reach; range of vision; angle of vision; lighting; color arrangement and harmony; as well as operator ease in operating the equipment system. The limiting criteria for the parameters mentioned above are based on the EPRI NP-3659, NUREG-0700 and NUREG/CR-3331 documents. In addition, the physical working environment of the control room must be designed to fulfil the standard criteria for ergonomic conditions based on NUREG-0800. The reliability of the equipment and instrumentation system is also evaluated from the man-machine interaction (MMI) between the operator and the equipment and instrumentation in the ABWR type power reactor control room. From the MMI analysis, the possibility of operating failures caused by the operator can be identified. The evaluation of equipment and instrumentation reliability in the ABWR type power reactor control room showed that the design of this control room is good and fulfils the established ergonomic standard criteria.

  2. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    Science.gov (United States)

    Basalamah, Anas; Sato, Takuro

For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but it cannot assure full reliability unless coupled with Automatic Repeat Request (ARQ), forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE 802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that the protocol can deliver high reliability and throughput.
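The simplest FEC building block is a single XOR parity packet, which lets a receiver recover any one lost packet without a retransmission (the paper's adaptive scheme chooses stronger codes as loss rates rise; this minimal sketch only illustrates the erasure-recovery principle):

```python
def xor_parity(packets):
    """Compute one parity packet as the byte-wise XOR of
    equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Recover a single lost packet (the None entry) by XORing
    the parity with every packet that did arrive."""
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, byte in enumerate(p):
                missing[i] ^= byte
    return bytes(missing)

data = [b"pkt0", b"pkt1", b"pkt2"]
parity = xor_parity(data)
restored = recover([data[0], None, data[2]], parity)   # pkt1 was lost
```

Because parity = p0 ^ p1 ^ p2, XORing it with the surviving packets cancels them out and leaves exactly the missing one; multicast receivers can each repair a different lost packet from the same parity transmission.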

  3. Design of piezoelectric transducer layer with electromagnetic shielding and high connection reliability

    International Nuclear Information System (INIS)

    Qiu, Lei; Yuan, Shenfang; Shi, Xiaoling; Huang, Tianxiang

    2012-01-01

Piezoelectric transducer (PZT) and Lamb wave based structural health monitoring (SHM) methods have been widely studied for on-line SHM of high-performance structures. To monitor large-scale structures, a dense PZT array is required. In order to improve placement efficiency and reduce the wiring burden of the PZT array, the concept of the piezoelectric transducer layer (PSL) was proposed. The PSL consists of PZTs, a flexible interlayer with printed wires, and a signal input/output interface. For on-line SHM on real aircraft structures, there are two main issues: electromagnetic interference and the connection reliability of the PSL. To address these issues, an electromagnetic shielding design method for the PSL that reduces spatial electromagnetic noise and crosstalk is proposed, and a connection reliability design method based on a combined welding–cementation process is proposed to enhance the connection reliability between the PZTs and the flexible interlayer. Two experiments on electromagnetic interference suppression are performed to validate the shielding design of the PSL. The experimental results show that the amplitudes of the spatial electromagnetic noise and crosstalk output from the shielded PSL developed in this paper are −15 dB and −25 dB lower, respectively, than those of the ordinary PSL. Two further experiments, on temperature durability (−55 °C to 80 °C) and strength durability (160–1600 με, one million load cycles), are applied to the PSL to validate the connection reliability. The low repeatability errors (less than 3% and less than 5%, respectively) indicate that the developed PSL has high connection reliability and a long fatigue life. (paper)

  4. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    OpenAIRE

    Chen, Xuyong; Chen, Qian; Bian, Xiaoya; Fan, Jianping

    2017-01-01

Due to the many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method is a lengthy and oscillating iteration process and makes it difficult to solve for the nonprobabilistic reliability index. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic ...

  5. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management

    DEFF Research Database (Denmark)

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E.

    2016-01-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems...... of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management....

  6. Quantum theory and the emergence of patterns in the universe

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1989-11-01

The topic of this symposium is the quest to discover, define, and interpret patterns in the universe. This quest has two parts. To discover and define these patterns is the task of science: this part of the quest is producing a copious flow of reliable information. To interpret or give meaning to these patterns is the task of natural philosophy: this part has not kept pace.

  7. reliability reliability

    African Journals Online (AJOL)

    eobe


  8. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
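The paper's central observation, that a surrogate only needs to predict the sign of the performance function correctly, can be demonstrated with a plain Monte Carlo estimate (this stands in for the paper's interval Monte Carlo with an actively trained kriging model; both functions below are hypothetical):

```python
import random

def g_true(x, y):
    """'Expensive' performance function; failure when g <= 0."""
    return x ** 2 + y ** 2 - 4.0

def g_surrogate(x, y):
    """A cheap stand-in that badly distorts magnitudes but, being an
    odd strictly increasing function of g_true, preserves its sign."""
    v = g_true(x, y)
    return 0.001 * v ** 3 + 0.5 * v

rng = random.Random(3)
samples = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(5000)]

# the failure-probability estimator only ever evaluates the indicator g <= 0
pf_true = sum(g_true(x, y) <= 0 for x, y in samples) / len(samples)
pf_surr = sum(g_surrogate(x, y) <= 0 for x, y in samples) / len(samples)
```

Since the indicator function sees only the sign, the two estimates agree sample by sample, which is why active learning can stop refining the kriging model once sign predictions are trusted near the limit state.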

  9. Toward robust phase-locking in Melibe swim central pattern generator models

    Science.gov (United States)

    Jalil, Sajiya; Allen, Dane; Youker, Joseph; Shilnikov, Andrey

    2013-12-01

Small groups of interneurons, known as central pattern generators (CPGs), are arranged into neural networks to generate a variety of core bursting rhythms with specific phase-locked states, on distinct time scales, which govern vital motor behaviors in invertebrates such as chewing and swimming. These movements in lower animals mimic motions of organs in higher animals due to evolutionarily conserved mechanisms. Hence, various neurological diseases can be linked to abnormal movement of body parts regulated by a malfunctioning CPG. In this paper, inspired by recent experimental studies of neuronal activity patterns recorded from a swimming motion CPG of the sea slug Melibe leonina, we examine a mathematical model of a 4-cell network that can plausibly and stably underlie the observed bursting rhythm. We develop a dynamical systems framework for explaining the existence and robustness of phase-locked states in activity patterns produced by the modeled CPGs. The proposed tools can be used for identifying core components of other CPG networks with reliable bursting outcomes and specific phase relationships between the interneurons. Our findings can be employed for identifying or implementing the conditions for normal and pathological functioning of basic CPGs of animals and of artificially intelligent prosthetics that regulate various movements.

  10. Reliability assessment of distribution system with the integration of renewable distributed generation

    International Nuclear Information System (INIS)

    Adefarati, T.; Bansal, R.C.

    2017-01-01

Highlights: • Addresses impacts of renewable DG on the reliability of the distribution system. • Multi-objective formulation for maximizing the cost saving with integration of DG. • Uses a Markov model to study the stochastic characteristics of the major components. • The investigation is done using the modified RBTS bus test distribution system. • The proposed approach is useful for electric utilities to enhance reliability. - Abstract: Recent studies have shown that renewable energy resources will contribute substantially to future energy generation owing to the rapid depletion of fossil fuels. Wind and solar energy resources are major sources of renewable energy that have the ability to reduce the energy crisis and the greenhouse gases emitted by conventional power plants. Reliability assessment is one of the key means of measuring the impact of renewable distributed generation (DG) units on distribution networks and of minimizing the cost associated with power outages. This paper presents a comprehensive reliability assessment of a distribution system that satisfies consumer load requirements with the penetration of a wind turbine generator (WTG), an electric storage system (ESS) and photovoltaics (PV). A Markov model is proposed to assess the stochastic characteristics of the major components of the renewable DG resources as well as their influence on the reliability of a conventional distribution system. The results obtained from the case studies demonstrate the effectiveness of using WTG, ESS and PV to enhance the reliability of the conventional distribution system.
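The simplest Markov reliability model behind such assessments is the two-state (up/down) component, whose long-run availability has a closed form (the failure and repair rates below are illustrative, not the paper's RBTS data):

```python
def steady_state_availability(failure_rate, repair_rate):
    """Long-run availability of a two-state (up/down) Markov component:
    A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series_availability(components):
    """Supply path where every component must be up: availabilities
    of independent components multiply."""
    a = 1.0
    for lam, mu in components:
        a *= steady_state_availability(lam, mu)
    return a

# e.g. a feeder in series with a DG unit (hypothetical rates, per year)
A = series_availability([(0.5, 50.0), (2.0, 40.0)])
```

Multi-state renewable sources (varying wind or irradiance) extend this with more Markov states, but the steady-state probabilities are obtained the same way and feed directly into indices such as expected outage cost.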

  11. Cell segmentation in time-lapse fluorescence microscopy with temporally varying sub-cellular fusion protein patterns.

    Science.gov (United States)

    Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M

    2009-01-01

    Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2) for an image with N^2 pixels, making it practical and scalable for high-throughput microscopy imaging studies.

  12. Interrater reliability of videotaped observational gait-analysis assessments.

    Science.gov (United States)

    Eastlack, M E; Arvidson, J; Snyder-Mackler, L; Danoff, J V; McGarvey, C L

    1991-06-01

    The purpose of this study was to determine the interrater reliability of videotaped observational gait-analysis (VOGA) assessments. Fifty-four licensed physical therapists with varying amounts of clinical experience served as raters. Three patients with rheumatoid arthritis who demonstrated an abnormal gait pattern served as subjects for the videotape. The raters analyzed each patient's most severely involved knee during the four subphases of stance for the kinematic variables of knee flexion and genu valgum. Raters were asked to determine whether these variables were inadequate, normal, or excessive. The temporospatial variables analyzed throughout the entire gait cycle were cadence, step length, stride length, stance time, and step width. Generalized kappa coefficients ranged from .11 to .52. Intraclass correlation coefficients (2,1) and (3,1) were slightly higher. Our results indicate that physical therapists' VOGA assessments are only slightly to moderately reliable and that improved interrater reliability of the assessments of physical therapists utilizing this technique is needed. Our data suggest that there is a need for greater standardization of gait-analysis training.
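    The kappa statistics reported above can be reproduced for any inter-rater contingency table; a minimal implementation of (unweighted) Cohen's kappa is:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater contingency table.

    table[i][j] counts items rated category i by rater A and j by rater B.
    kappa = (po - pe) / (1 - pe), observed vs chance agreement.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n          # observed agreement
    pe = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
             for i in range(k))                          # chance agreement
    return (po - pe) / (1 - pe)
```

    For example, a 2x2 table [[20, 5], [10, 15]] gives po = 0.7, pe = 0.5 and kappa = 0.4, i.e. "moderate" agreement on the usual benchmarks, comparable to the upper end of the range reported in this study.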

  13. Reliable Path Selection Problem in Uncertain Traffic Network after Natural Disaster

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available After a natural disaster, especially a large-scale disaster with wide affected areas, vast relief materials are often needed. In the meantime, the traffic network is often uncertain because of the disaster. In this paper, we assume that the edges in the network are either connected or blocked, and that the connection probability of each edge is known. In order to ensure the arrival of these supplies at the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. Then, an adjustable reliable path selection model is proposed for the case in which an edge of the selected reliable path is broken, and the corresponding algorithms are shown to be efficient both theoretically and numerically.
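    Although the paper's own algorithms are not reproduced here, a standard way to select the most reliable path under independent edge connection probabilities is Dijkstra's algorithm on edge weights -log(p), since maximizing a product of probabilities is equivalent to minimizing the sum of their negative logarithms. A sketch with a hypothetical three-node network:

```python
import heapq
import math

def most_reliable_path(edges, src, dst):
    """Dijkstra on -log(p): maximizing the product of edge connection
    probabilities equals minimizing the summed negative log-probabilities."""
    graph = {}
    for u, v, p in edges:
        graph.setdefault(u, []).append((v, p))
        graph.setdefault(v, []).append((u, p))
    best, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        cost, u = heapq.heappop(pq)
        if u == dst:
            break
        if cost > best.get(u, float("inf")):
            continue
        for v, p in graph[u]:
            c = cost - math.log(p)
            if c < best.get(v, float("inf")):
                best[v], prev[v] = c, u
                heapq.heappush(pq, (c, v))
    path, node = [dst], dst
    while node != src:            # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-best[dst])

edges = [("A", "B", 0.9), ("B", "C", 0.9), ("A", "C", 0.8)]
path, rel = most_reliable_path(edges, "A", "C")
```

    Here the two-hop route A-B-C (reliability 0.81) beats the direct edge (0.8), showing why the most reliable path is not always the shortest one.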

  14. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
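    The duality-axiom point can be made concrete with a toy possibility distribution (values illustrative): the possibility of an event and of its complement can sum to more than one, which is exactly the behavior the review flags as paradoxical, whereas necessity is dual to possibility by construction.

```python
# Hypothetical possibility distribution over component states.
pi = {"working": 1.0, "degraded": 0.7, "failed": 0.4}

def possibility(event):
    """Pos(A) = max of the possibility distribution over A."""
    return max(pi[s] for s in event)

def necessity(event):
    """Nec(A) = 1 - Pos(complement of A), the dual measure."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(complement) if complement else 1.0

up = {"working", "degraded"}
down = {"failed"}
pos_up, pos_down = possibility(up), possibility(down)
# Duality fails for possibility: Pos(up) + Pos(down) = 1.0 + 0.4 = 1.4 > 1,
# whereas Nec(up) + Pos(down) = 0.6 + 0.4 = 1 holds by construction.
```

    A metric satisfying the duality axiom (such as the uncertainty-theory-based belief reliability) forces the measure of "system works" and "system fails" to sum to one, avoiding this inconsistency.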

  15. Reliability of pulse diagnosis in traditional Indian Ayurveda medicine

    DEFF Research Database (Denmark)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon

    2013-01-01

    In Ayurveda, pulse diagnosis is an important diagnostic method to assess the status of three doshas (bio-entity: vata, pitta and kapha) in the patient. However, this is only justifiable if this method is reliable. The aim of this study is to test the intra-rater and inter-rater reliability of pulse...... diagnosed various combinations of three bio-entities vata, pitta and kapha based on the qualitative description of pulse pattern in Ayurveda. Cohen's weighted kappa statistic was used as a measure of reliability and hypothesis of homogeneous diagnosis (random rating) was tested. The level of weighted kappa...... statistics for each doctor was -0.18, 0.12, 0.31, -0.02, 0.48, 0.1, 0.26, 0.2, 0.34, 0.15, 0.56, 0.03, 0.36, 0.21, 0.4 respectively and the hypothesis of homogeneous diagnosis was only significant (p = 0.04) at the 5 % level for one doctor. The kappa values are in general bigger for the group...

  16. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    Science.gov (United States)

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach with at least 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.
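    The Monte Carlo step that SqANN relies on for training labels can be sketched independently of the neural network: sample each edge up or down, test s-t connectivity, and average. This is a generic two-terminal reliability estimator, not the authors' code; the network below is a toy example with a known exact answer.

```python
import random
from collections import deque

def mc_reliability(nodes, edges, s, t, trials=20000, seed=0):
    """Monte Carlo two-terminal reliability: each edge (u, v, p) is up
    with probability p; estimate P(s and t remain connected)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {n: [] for n in nodes}
        for u, v, p in edges:
            if rng.random() < p:      # sample this edge's state
                adj[u].append(v)
                adj[v].append(u)
        seen, queue = {s}, deque([s])  # BFS from s
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        hits += t in seen
    return hits / trials

# Two parallel links of p = 0.5: exact reliability is 1 - 0.5**2 = 0.75.
est = mc_reliability(["s", "t"], [("s", "t", 0.5), ("s", "t", 0.5)], "s", "t")
```

    In the paper's pipeline, estimates like this (over a Box-Behnken design of parameter settings) form the training targets the network learns to generalize into a symbolic reliability function.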

  17. Material and design considerations of FBGA reliability performance

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Teck Kheng; Ng, T.C.; Chai, Y.M

    2004-09-01

    FBGA package reliability is usually assessed through the conventional approaches of die attach and mold compound material optimization. However, with the rapid changes and fast-moving pace of electronic packaging and the introduction of new soldermask and core materials, substrate design has also become a critical factor in determining overall package reliability. The purpose of this paper is to understand the impact of the design and soldermask material of a rigid substrate on overall package reliability. Three different soldermask patterns with a matrix of different die attach, mold compound, and soldermask materials are assessed using the moisture sensitivity test (MST). Package reliability is also assessed through the use of temperature cycling (T/C) at conditions 'B' and 'C'. For material optimization, three different mold compounds and die attach materials are used. Material adhesion between different die attach materials and soldermask materials is obtained through die shear performed at various temperatures and preset moisture conditions. A study correlating the different packaging material properties and their relative adhesion strengths with overall package reliability in terms of both MST and T/C performance was performed. Soldermask design under the die pads was found to affect package reliability. For example, locating vias at the edge of the die is not desirable because the vias act as initiation points for delamination and moisture-induced failure. Through die shear testing, soldermask B demonstrated higher adhesion properties compared to soldermask A across several packaging materials and enhanced the overall package reliability in terms of both MST and T/C performance. Both MST JEDEC level 1 and the T/C of 'B' and 'C' at 1000 cycles have been achieved through design and package material optimization.

  18. High-Efficiency Reliable Stirling Generator for Space Exploration Missions, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA needs advanced power-conversion technologies to improve the efficiency and reliability of power conversion for space exploration missions. We propose to develop...

  19. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  20. RELIABLE DYNAMIC SOURCE ROUTING PROTOCOL (RDSRP) FOR ENERGY HARVESTING WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    B. Narasimhan

    2015-03-01

    Full Text Available Wireless sensor networks (WSNs) offer noteworthy advantages over traditional communication. However, harsh and complex environments pose great challenges to the reliability of WSN communications. It is therefore vital to develop a reliable unipath dynamic source routing protocol (RDSRP) for WSNs to provide better quality of service (QoS) in energy harvesting wireless sensor networks (EH-WSNs). This paper proposes a dynamic source routing approach for attaining the most reliable route in EH-WSNs. Performance evaluation is carried out using NS-2, with throughput and packet delivery ratio chosen as the metrics.

  1. Reliability evaluation of a port oil transportation system in variable operation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Soszynska, Joanna [Department of Mathematics, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)]. E-mail: joannas@am.gdynia.pl

    2006-04-15

    The semi-Markov model of the system operation processes is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system.
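    For identical components with reliability p, an 'm out of k_n' subsystem's reliability is a binomial tail sum, and a series arrangement multiplies the subsystem reliabilities. The sketch below covers only this static building block, not the semi-Markov operation process of the paper; the numbers are illustrative.

```python
from math import comb

def m_out_of_k(m, k, p):
    """Reliability of an 'm out of k' subsystem: works iff at least m
    of k identical components (each with reliability p) work."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(m, k + 1))

def series(subsystems):
    """Series multi-state system: every 'm out of k_n' subsystem must work."""
    r = 1.0
    for m, k, p in subsystems:
        r *= m_out_of_k(m, k, p)
    return r

# E.g. a 2-out-of-3 stage followed by a 1-out-of-2 stage.
r = series([(2, 3, 0.9), (1, 2, 0.8)])
```

    In the joint model of the paper, p would additionally depend on the current operation state of the semi-Markov process, and the overall reliability is averaged over that process.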

  2. Reliable and Efficient Communications in Wireless Sensor Networks

    International Nuclear Information System (INIS)

    Abdelhakim, M.M.

    2014-01-01

    Wireless sensor network (WSN) is a key technology for a wide range of military and civilian applications. Limited by the energy resources and processing capabilities of the sensor nodes, reliable and efficient communications in wireless sensor networks are challenging, especially when the sensors are deployed in hostile environments. This research aims to improve the reliability and efficiency of time-critical communications in WSNs, under both benign and hostile environments. We start with wireless sensor network with mobile access points (SENMA), where the mobile access points traverse the network to collect information from individual sensors. Due to its routing simplicity and energy efficiency, SENMA has attracted lots of attention from the research community. Here, we study reliable distributed detection in SENMA under Byzantine attacks, where some authenticated sensors are compromised to report fictitious information. The q-out-of-m rule is considered. It is popular in distributed detection and can achieve a good trade-off between the miss detection probability and the false alarm rate. However, a major limitation with this rule is that the optimal scheme parameters can only be obtained through exhaustive search. By exploiting the linear relationship between the scheme parameters and the network size, we propose simple but effective sub-optimal linear approaches. Then, for better flexibility and scalability, we derive a near-optimal closed-form solution based on the central limit theorem. It is proved that the false alarm rate of the q-out-of-m scheme diminishes exponentially as the network size increases, even if the percentage of malicious nodes remains fixed. This implies that large-scale sensor networks are more reliable under malicious attacks. 
To further improve the performance under time-varying attacks, we propose an effective malicious node detection scheme for adaptive data fusion; the proposed scheme is analyzed using the entropy-based trust model.
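    The q-out-of-m rule's false alarm behavior can be checked directly: with independent per-sensor false alarm rate p_f, the system false alarm rate is a binomial tail probability, and with the threshold held at a fixed fraction of the network size it shrinks as the network grows, consistent with the exponential-decay claim above. The numbers are illustrative.

```python
from math import comb, ceil

def false_alarm(m, q, pf):
    """P(at least q of m sensors raise a false alarm), each with rate pf."""
    return sum(comb(m, i) * pf**i * (1 - pf)**(m - i) for i in range(q, m + 1))

# Keep the decision threshold at 30% of the network size, pf = 0.1.
small = false_alarm(20, ceil(0.3 * 20), 0.1)    # 20-sensor network
large = false_alarm(100, ceil(0.3 * 100), 0.1)  # 100-sensor network
```

    Because the threshold fraction (0.3) exceeds the per-sensor rate (0.1), a Chernoff-type bound makes the tail probability decay exponentially in m, so `large` is far below `small`.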

  3. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
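    A hedged sketch of the two ingredients (functional forms and parameter values are illustrative placeholders, not the calibrated models from the paper): a breakdown probability that increases with flow rate, and a Weibull-hazard duration sampled by inverse transform.

```python
import math
import random

def breakdown_prob(flow, flow_crit=1800.0, scale=150.0):
    """Illustrative logistic probability of flow breakdown as a function
    of flow rate (veh/h/lane); parameter values are placeholders."""
    return 1.0 / (1.0 + math.exp(-(flow - flow_crit) / scale))

def breakdown_duration(rng, shape=1.5, scale_min=20.0):
    """Sample a breakdown duration (minutes) from a Weibull hazard model
    via inverse-transform sampling."""
    return scale_min * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

rng = random.Random(42)
durations = [breakdown_duration(rng) for _ in range(1000)]
```

    In the simulation loop, each time step draws a Bernoulli breakdown event from `breakdown_prob` at the prevailing flow and, on breakdown, holds reduced capacity for a sampled duration, which is what produces endogenous travel time variability.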

  4. System reliability evaluation of a touch panel manufacturing system with defect rate and reworking

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Huang, Cheng-Fu; Chang, Ping-Chen

    2013-01-01

    In recent years, portable consumer electronic products such as cell phones, GPS devices, digital cameras, tablet PCs, and notebooks have been using touch panels as their interface. As the demand for touch panels increases, performance assessment is essential for touch panel production. This paper develops a method to evaluate the system reliability of a touch panel manufacturing system (TPMS) with the defect rate of each workstation, taking reworking actions into account. The system reliability, which evaluates the possibility of demand satisfaction, can provide managers with an understanding of the system capability and can indicate possible improvements. First, we construct a capacitated manufacturing network (CMN) for a TPMS. Second, a decomposition technique is developed to determine the input flow of each workstation based on the CMN. Finally, we generate the minimal capacity vectors that should be provided to satisfy the demand. The system reliability is subsequently evaluated in terms of the minimal capacity vectors. A further decision making issue is discussed to decide a reliable production strategy. -- Graphical abstract: The proposed procedure to evaluate system reliability of the touch panel manufacturing system (TPMS). Highlights: • The system reliability of a touch panel manufacturing system (TPMS) is evaluated. • The reworking actions are taken into account in the TPMS. • A capacitated manufacturing network is constructed for the TPMS. • A procedure is proposed to evaluate the system reliability of the TPMS.
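    A minimal stand-in for the demand-satisfaction computation (not the authors' decomposition technique; station names, distributions and defect rates are toy values): enumerate the joint capacity states of the workstations, treat the line output as the bottleneck of defect-adjusted capacities, and sum the probabilities of states that meet demand.

```python
from itertools import product

def system_reliability(stations, demand):
    """P(the line satisfies demand): each station has a discrete capacity
    distribution {capacity: probability}; line output is the bottleneck
    (minimum) of effective station capacities after defect losses."""
    names = list(stations)
    total = 0.0
    for combo in product(*(stations[n]["dist"].items() for n in names)):
        prob = 1.0
        out = float("inf")
        for (cap, p), n in zip(combo, names):
            prob *= p
            out = min(out, cap * (1.0 - stations[n]["defect"]))
        if out >= demand:
            total += prob
    return total

# Hypothetical two-station line, each up (capacity 100) with prob 0.9.
stations = {
    "lamination": {"dist": {0: 0.1, 100: 0.9}, "defect": 0.0},
    "bonding":    {"dist": {0: 0.1, 100: 0.9}, "defect": 0.0},
}
rel_100 = system_reliability(stations, 100)
```

    The paper's minimal capacity vectors avoid this brute-force enumeration, but the quantity being computed, the probability that the joint capacity state can satisfy the demand, is the same.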

  5. Research on reliability management systems for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Maki, Nobuo

    2000-01-01

    Investigation on a reliability management system for Nuclear Power Plants (NPPs) has been performed on national and international archived documents as well as on current status of studies at Idaho National Engineering and Environmental Laboratory (INEEL), US NPPs (McGuire, Seabrook), a French NPP (St. Laurent-des-Eaux), Japan Atomic Energy Research Institute (JAERI), Central Research Institute of Electric Power Industries (CRIEPI), and power plant manufacturers in Japan. As a result of the investigation, the following points were identified: (i) A reliability management system is composed of a maintenance management system to inclusively manage maintenance data, and an anomalies information and reliability data management system to extract data from maintenance results stored in the maintenance management system and construct a reliability database. (ii) The maintenance management system, which is widely-used among NPPs in the US and Europe, is an indispensable system for the increase of maintenance reliability. (iii) Maintenance management methods utilizing reliability data like Reliability Centered Maintenance are applied for NPP maintenance in the US and Europe, and contributing to cost saving. Maintenance templates are effective in the application process. In addition, the following points were proposed on the design of the system: (i) A detailed database on specifications of facilities and components is necessary for the effective use of the system. (ii) A demand database is indispensable for the application of the methods. (iii) Full-time database managers are important to maintain the quality of the reliability data. (author)

  6. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  7. Sonographic Diagnosis of Tubal Cancer with IOTA Simple Rules Plus Pattern Recognition

    Science.gov (United States)

    Tongsong, Theera; Wanapirak, Chanane; Tantipalakorn, Charuwan; Tinnangwattana, Dangcheewan

    2017-11-26

    Objective: To evaluate the diagnostic performance of IOTA simple rules plus pattern recognition in predicting tubal cancer. Methods: Secondary analysis was performed on the prospective database of our IOTA project. The patients recruited in the project were those scheduled for pelvic surgery due to adnexal masses. The patients underwent ultrasound examinations within 24 hours before surgery. On ultrasound examination, the masses were evaluated using the well-established IOTA simple rules plus pattern recognition (sausage-shaped appearance, incomplete septum, visible ipsilateral ovaries) to predict tubal cancer. The gold standard diagnosis was based on histological or operative findings. Results: A total of 482 patients, including 15 cases of tubal cancer, were evaluated by ultrasound preoperatively. The IOTA simple rules plus pattern recognition gave a sensitivity of 86.7% (13 in 15) and a specificity of 97.4%. Sausage-shaped appearance was identified in nearly all cases (14 in 15). Incomplete septa and normal ovaries could be identified in 33.3% and 40%, respectively. Conclusion: IOTA simple rules plus pattern recognition is relatively effective in predicting tubal cancer. Thus, we propose a simple scheme for the diagnosis of tubal cancer as follows. First, the adnexal masses are evaluated with the IOTA simple rules. If the B-rules can be applied, tubal cancer is reliably excluded. If the M-rules can be applied or the result is inconclusive, careful delineation of the mass with pattern recognition should be performed.

  8. Proposed diagnostic criteria for internet addiction.

    Science.gov (United States)

    Tao, Ran; Huang, Xiuqin; Wang, Jinan; Zhang, Huimin; Zhang, Ying; Li, Mengchen

    2010-03-01

    The objective of this study was to develop diagnostic criteria for internet addiction disorder (IAD) and to evaluate the validity of our proposed diagnostic criteria for discriminating non-dependent from dependent internet use in the general population. This study was conducted in three stages: the developmental stage (110 subjects in the survey group; 408 subjects in the training group), where items of the proposed diagnostic criteria were developed and tested; the validation stage (n = 405), where the proposed criteria were evaluated for criterion-related validity; and the clinical stage (n = 150), where the criteria and the global clinical impression of IAD were evaluated by more than one psychiatrist to determine inter-rater reliability. The proposed internet addiction diagnostic criteria consisted of symptom criterion (seven clinical symptoms of IAD), clinically significant impairment criterion (functional and psychosocial impairments), course criterion (duration of addiction lasting at least 3 months, with at least 6 hours of non-essential internet usage per day) and exclusion criterion (exclusion of dependency attributed to psychotic disorders). A diagnostic score of 2 + 1, where the first two symptoms (preoccupation and withdrawal symptoms) and at least one of the five other symptoms (tolerance, lack of control, continued excessive use despite knowledge of negative effects/affects, loss of interests excluding internet, and use of the internet to escape or relieve a dysphoric mood) was established. Inter-rater reliability was 98%. Our findings suggest that the proposed diagnostic criteria may be useful for the standardization of diagnostic criteria for IAD.
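    The '2 + 1' rule combined with the course, impairment and exclusion criteria can be written as a small decision function. This is an illustrative encoding of the criteria as described, not a clinical instrument; the symptom identifiers are hypothetical names.

```python
# The two mandatory core symptoms and the five "at least one of" symptoms.
CORE = {"preoccupation", "withdrawal"}
OTHER = {"tolerance", "lack_of_control", "continued_use_despite_harm",
         "loss_of_other_interests", "escape_dysphoric_mood"}

def meets_iad_criteria(symptoms, months, daily_hours, impairment, psychotic):
    """Sketch of the proposed '2 + 1' IAD rule: both core symptoms plus at
    least one of the five others, course of >= 3 months with >= 6 h/day of
    non-essential use, clinically significant impairment, and no exclusion
    (dependency attributable to a psychotic disorder)."""
    symptom_ok = CORE <= symptoms and len(symptoms & OTHER) >= 1
    course_ok = months >= 3 and daily_hours >= 6
    return symptom_ok and course_ok and impairment and not psychotic
```

    Writing the criteria as code makes the conjunctive structure explicit: failing any one criterion (for example a course shorter than 3 months) rejects the diagnosis regardless of the symptom count.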

  9. Optimal design of water supply networks for enhancing seismic reliability

    International Nuclear Information System (INIS)

    Yoo, Do Guen; Kang, Doosun; Kim, Joong Hoon

    2016-01-01

    The goal of the present study is to construct a reliability evaluation model of a water supply system that takes seismic hazards into consideration and to present techniques to enhance the hydraulic reliability of the design. To maximize seismic reliability with limited budgets, an optimal design model is developed using an optimization technique called harmony search (HS). The model is applied to actual water supply systems to determine pipe diameters that can maximize seismic reliability. The reliabilities of the optimal design and existing designs were compared and analyzed. The optimal design would enhance reliability by approximately 8.9% while having a construction cost approximately 1.3% less than the current pipe construction cost. In addition, reinforcing the durability of individual pipes without considering the system produced ineffective results in terms of both cost and reliability. Therefore, to increase the supply ability of the entire system, optimized pipe diameter combinations should be derived. Normal-status hydraulic stability and abnormal-status available demand could be maximally secured if the system is configured through the optimal design. - Highlights: • We construct a seismic reliability evaluation model of a water supply system. • We present techniques to enhance hydraulic reliability in the design stage. • A harmony search algorithm is applied in the optimal design process. • The proposed optimal design improves reliability by about 9%. • Optimized pipe diameter combinations should be derived.
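    Harmony search itself is easy to sketch. The toy objective below is a stand-in for the actual cost-plus-reliability objective of the pipe-sizing problem; the memory size, HMCR, PAR and bandwidth values are illustrative defaults, not the paper's settings.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   iterations=2000, seed=1):
    """Minimal harmony search sketch: a memory of candidate solutions,
    memory consideration (hmcr), pitch adjustment (par), random selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(*bounds[d]) for d in range(dim)]
              for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d in range(dim):
            lo, hi = bounds[d]
            if rng.random() < hmcr:              # take from harmony memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                    # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy stand-in for the cost/seismic-reliability objective: sphere function.
sol, val = harmony_search(lambda x: sum(v * v for v in x),
                          [(-10.0, 10.0)] * 3)
```

    In the pipe-sizing application, each dimension would instead be a discrete diameter choice and the objective would trade construction cost against the seismic reliability estimate.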

  10. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  11. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    Science.gov (United States)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment and environment. The reliability of each individual factor must be analyzed first in order to transition to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism that can truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. At last, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. Through the given evaluation domain it can be seen that the reliability of the man-machine-environment integrated system is in a good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
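    A minimal fuzzy comprehensive evaluation of the kind described (the weights, membership degrees and grade values below are illustrative, not the paper's data): compute B = W·R and defuzzify against grade values to obtain a 0-100 score comparable to the reported 86.29.

```python
def fuzzy_evaluate(weights, membership, grade_values):
    """Weighted fuzzy comprehensive evaluation: B = W * R, then a crisp
    score as the grade-value weighted average of B."""
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(len(grade_values))]
    total = sum(b)
    return sum(bj * g for bj, g in zip(b, grade_values)) / total

weights = [0.40, 0.35, 0.25]       # man, machine, environment (illustrative)
membership = [                      # rows: factor memberships over 4 grades
    [0.3, 0.5, 0.2, 0.0],           # human factor
    [0.4, 0.4, 0.2, 0.0],           # machinery equipment
    [0.2, 0.5, 0.2, 0.1],           # environment
]
score = fuzzy_evaluate(weights, membership, [95, 85, 70, 50])
```

    With these toy inputs the crisp score is 84.2; mapping the score back into the evaluation domain (e.g. "good" for 80-90) is the final step the paper describes.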

  12. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
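    The core idea, linking each tool component to a discrete-time Markov chain and computing the probability of successful completion, can be sketched with a toy chain (the states and transition probabilities below are hypothetical, not measurements of the Forensic Toolkit Imager):

```python
def chain_reliability(transitions, start="start", success="done", steps=100):
    """Probability that a discrete-time Markov chain over tool components
    is absorbed in the success state rather than the failure state."""
    dist = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for state, p in dist.items():
            outs = transitions.get(state)
            if outs is None:                    # absorbing state
                nxt[state] = nxt.get(state, 0.0) + p
            else:
                for dest, q in outs.items():
                    nxt[dest] = nxt.get(dest, 0.0) + p * q
        dist = nxt
    return dist.get(success, 0.0)

# Hypothetical two-component pipeline: each component either hands off to
# the next component or fails.
transitions = {
    "start":   {"acquire": 1.0},
    "acquire": {"verify": 0.95, "fail": 0.05},
    "verify":  {"done": 0.90, "fail": 0.10},
}
rel = chain_reliability(transitions)
```

    For this chain the tool reliability is 0.95 x 0.90 = 0.855; comparing such values across alternative component topologies is exactly the design-comparison use case the paper proposes.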

  13. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  14. Reliability of a coordinate system based on anatomical landmarks of the maxillofacial skeleton. An evaluation method for three-dimensional images obtained by cone-beam computed tomography

    International Nuclear Information System (INIS)

    Kimura, Momoko; Nawa, Hiroyuki; Yoshida, Kazuhito; Muramatsu, Atsushi; Fuyamada, Mariko; Goto, Shigemi; Ariji, Eiichiro; Tokumori, Kenji; Katsumata, Akitoshi

    2009-01-01

    We propose a method for evaluating the reliability of a coordinate system based on maxillofacial skeletal landmarks and use it to assess two coordinate systems. Scatter plots and 95% confidence ellipses of an objective landmark were defined as an index for demonstrating the stability of the coordinate system. A head phantom was positioned horizontally in reference to the Frankfurt horizontal and occlusal planes and subsequently scanned once in each position using cone-beam computed tomography. On the three-dimensional images created with a volume-rendering procedure, six dentists twice set two different coordinate systems: coordinate system 1 was defined by the nasion, sella, and basion, and coordinate system 2 was based on the left orbitale, bilateral porions, and basion. The menton was assigned as an objective landmark. The scatter plot and 95% ellipse of the menton indicated the high-level reliability of coordinate system 2. The patterns with the two coordinate systems were similar between data obtained in different head positions. The method presented here may be effective for evaluating the reliability (reproducibility) of coordinate systems based on skeletal landmarks. (author)
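    A 95% confidence ellipse of a 2-D landmark scatter has semi-axes sqrt(chi2 * lambda_i) along the eigenvectors of the sample covariance matrix, where chi2 with 2 degrees of freedom at 95% is about 5.991. A self-contained sketch (pure stdlib, 2x2 eigenvalues in closed form):

```python
import math

def confidence_ellipse_axes(points, chi2_2df_95=5.991):
    """Semi-axes of the 95% confidence ellipse of 2-D landmark scatter:
    sqrt(chi2 * eigenvalue) for each eigenvalue of the sample covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Eigenvalues of the 2x2 covariance matrix in closed form.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return math.sqrt(chi2_2df_95 * l1), math.sqrt(chi2_2df_95 * l2)

# Toy scatter: spread twice as wide along x as along y.
axes = confidence_ellipse_axes([(-2.0, 0.0), (2.0, 0.0),
                                (0.0, -1.0), (0.0, 1.0)])
```

    Smaller ellipse areas for the objective landmark (the menton, in the study) indicate a more stable coordinate system, which is how coordinate system 2 was judged more reliable.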

  15. Blackout risk prevention in a smart grid based flexible optimal strategy using Grey Wolf-pattern search algorithms

    International Nuclear Information System (INIS)

    Mahdad, Belkacem; Srairi, K.

    2015-01-01

    Highlights: • A generalized optimal security power system planning strategy for blackout risk prevention is proposed. • A Grey Wolf Optimizer dynamically coordinated with a Pattern Search algorithm is proposed. • A useful optimized database is dynamically generated considering margin loading stability under severe faults. • The robustness and feasibility of the proposed strategy are validated on the standard IEEE 30-bus system. • The proposed planning strategy will be useful for power system protection coordination and control. - Abstract: Developing a flexible and reliable power system planning strategy for critical situations is of great importance to experts and industry in minimizing the probability of blackouts. This paper introduces the first stage of this practical strategy: the application of a Grey Wolf Optimizer coordinated with a pattern search algorithm to the security management of a smart grid power system under critical situations. The main objective of the proposed planning strategy is to protect the power system against blackout due to the appearance of faults in generating units or important transmission lines. In the first stage, the system is pushed to its stability margin limit and the critical loads to shed are selected using a voltage stability index. In the second stage, the generator control variables and the reactive power of shunt and dynamic compensators are adjusted, in coordination with minimization of the active and reactive power at critical loads, to keep the system in a secure state and ensure service continuity. The feasibility and efficiency of the proposed strategy are demonstrated on the IEEE 30-bus test system. The results are promising and prove the practical efficiency of the proposed strategy to ensure system security under critical situations.
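
For readers unfamiliar with the optimizer named above, a minimal Grey Wolf Optimizer sketch on a toy objective is given below. This is only the GWO component; the pattern-search coordination and all power-system constraints of the paper are omitted, and every parameter value is illustrative:

```python
import random

def gwo_minimize(f, dim, n_wolves=12, n_iters=200, lo=-5.0, hi=5.0, seed=7):
    # Minimal Grey Wolf Optimizer for a box-constrained minimization problem.
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(n_iters):
        wolves.sort(key=f)
        # The three best wolves (alpha, beta, delta) lead the search; copy them
        # so in-place updates below do not corrupt the leaders mid-iteration.
        leaders = [list(w) for w in wolves[:3]]
        a = 2.0 * (1.0 - it / n_iters)  # exploration factor decays from 2 to 0
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    A = 2.0 * a * rng.random() - a
                    C = 2.0 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3.0))  # average of the three pulls
    return min(wolves, key=f)

sphere = lambda x: sum(v * v for v in x)  # toy objective, minimum at the origin
best = gwo_minimize(sphere, dim=2)
```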

  16. An adaptive cubature formula for efficient reliability assessment of nonlinear structural dynamic systems

    Science.gov (United States)

    Xu, Jun; Kong, Fan

    2018-05-01

    Extreme value distribution (EVD) evaluation is a critical topic in reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of the EVD. Then, an adaptive cubature formula is proposed for assessing the fractional moments involved in the MEM, which is closely related to the efficiency and accuracy of the reliability analysis. Three point sets, comprising a total of 2d² + 1 integration points in dimension d, are generated in the proposed formula; in this regard, the efficiency of the proposed formula is ensured. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive to the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere which contains the bulk of the total probability. In this regard, the tail distribution may be better reproduced and the fractional moments can be evaluated accurately. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitations, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.
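
For orientation, the fractional moments targeted by the cubature formula are M_α = E[Y^α]. A crude Monte Carlo estimate, which the 2d² + 1-point formula is designed to replace at far lower cost, can be sketched as follows (a standard exponential variable stands in for the extreme-value response; everything here is illustrative):

```python
import math, random

def fractional_moment(samples, alpha):
    # Monte Carlo estimate of M_alpha = E[Y^alpha] for a positive response Y.
    return sum(y ** alpha for y in samples) / len(samples)

rng = random.Random(0)
# Stand-in samples of the extreme-value response: Exp(1), so the exact
# fractional moment is E[Y^alpha] = Gamma(1 + alpha).
y = [rng.expovariate(1.0) for _ in range(200000)]
m_half = fractional_moment(y, 0.5)  # exact value: Gamma(1.5) ~ 0.8862
```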

  17. A Novel Reliable WDM-PON System

    Science.gov (United States)

    Chen, Benyang; Gan, Chaoqin; Qi, Yongqian; Xia, Lei

    2011-12-01

    In this paper, a reliable Wavelength-Division-Multiplexing Passive Optical Network (WDM-PON) system is proposed. It can provide protection against both feeder fiber and distribution fiber failures. When a fiber failure occurs, the corresponding switches in the OLT and in the ONU switch to the protection link without affecting users in normal status; that is, the protection of one ONU is independent of the other ONUs.

  18. Typhoid outbreak in Songkhla, Thailand 2009-2011: clinical outcomes, susceptibility patterns, and reliability of serology tests.

    Directory of Open Access Journals (Sweden)

    Wannee Limpitikul

    Full Text Available OBJECTIVE: To determine the clinical manifestations and outcomes, the reliability of Salmonella enterica serotype Typhi (S ser. Typhi) IgM and IgG rapid tests, and the susceptibility patterns and the response to treatment during the 2009-2011 typhoid outbreak in Songkhla province in Thailand. METHOD: The medical records of children aged <15 years with S ser. Typhi bacteremia were analysed. The efficacy of the typhoid IgM and IgG rapid tests and susceptibility of the S ser. Typhi to the current main antibiotics used for typhoid (amoxicillin, ampicillin, cefotaxime, ceftriaxone, co-trimoxazole, and ciprofloxacin) were evaluated. RESULTS: S ser. Typhi bacteremia was found in 368 patients, and all isolated strains were susceptible to all 6 antimicrobials tested. Most of the patients were treated with ciprofloxacin for 7-14 days. The median time (IQR) of fever before treatment and duration of fever after treatment were 5 (4, 7) days and 4 (3, 5) days, respectively. Complications of ascites, lower respiratory symptoms, anemia (Hct <30%), and ileal perforation were found in 7, 7, 22, and 1 patients, respectively. None of the patients had recurrent infection or died. The sensitivities of the typhoid IgM and IgG tests were 58.3% and 25.6% respectively, and specificities were 74.1% and 50.5%, respectively. CONCLUSION: Most of the patients were diagnosed at an early stage and treated with a good outcome. All S ser. Typhi strains were susceptible to standard first line antibiotic typhoid treatment. The typhoid IgM and IgG rapid tests had low sensitivity and moderate specificity.

  19. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for the wind power converters with more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect the relationship between reliability performances and corresponding semiconductor cost...... temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power...... for power switching devices. First the conduction loss, switching loss as well as thermal impedance models of power switching devices (IGBT module) are related to the semiconductor chip number information respectively. Afterwards simplified analytical solutions, which can directly extract the junction...

  20. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance, and periodic inspection), analysis of common cause failures, and models for the analysis of repair effects.
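
Two of the system-reliability building blocks covered by such texts are compact enough to state directly; a minimal sketch for independent components:

```python
def series_reliability(rs):
    # Series system: all components must work, so reliabilities multiply.
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    # Parallel (redundant) system: it fails only if every component fails.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q
```

As expected, redundancy raises reliability above any single component, while a series chain is weaker than its weakest link.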

  1. Reliability and validity of videotaped functional performance tests in ACL-injured subjects

    DEFF Research Database (Denmark)

    von Porat, Anette; Holmström, Eva; Roos, Ewa

    2008-01-01

    BACKGROUND AND PURPOSE: In clinical practice, visual observation is often used to determine functional impairment and to evaluate treatment following a knee injury. The aim of this study was to evaluate the reliability and validity of observational assessments of knee movement pattern quality......, crossover hop on one leg and one-leg hop. The videos were observed by four physiotherapists, and the knee movement pattern quality, a feature of the loading strategy of the lower extremity, was scored on an 11-point rating scale. To assess the criterion validity, the observational rating was correlated...... obtained between the observers' assessment and knee flexion angle, r = 0.37-0.61. The crossover hop test or one-leg hop test was ranked as the most useful test in 172 of 192 occasions (90%) when assessing knee function. CONCLUSION: The moderate to good inter-observer reliability and the moderate criterion...

  2. Domain knowledge patterns in pedagogical diagnostics

    Science.gov (United States)

    Miarka, Rostislav

    2017-07-01

    This paper presents a proposal for representing knowledge patterns in the RDF(S) language. Knowledge patterns are used for the reuse of knowledge and can be divided into two groups: top-level knowledge patterns and domain knowledge patterns. Pedagogical diagnostics is aimed at testing the knowledge of students at primary and secondary schools. An example of a domain knowledge pattern from pedagogical diagnostics is included in this paper.
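
A domain knowledge pattern of this kind can be pictured as a small set of RDF(S)-style triples. The vocabulary below (the `pd:` prefix and its class and property names) is hypothetical, not taken from the paper:

```python
# Hypothetical RDF(S)-style triples for a pedagogical-diagnostics pattern:
# a diagnostic test is a class of resources that have questions.
triples = [
    ("pd:DiagnosticTest", "rdf:type", "rdfs:Class"),
    ("pd:Question", "rdf:type", "rdfs:Class"),
    ("pd:hasQuestion", "rdfs:domain", "pd:DiagnosticTest"),
    ("pd:hasQuestion", "rdfs:range", "pd:Question"),
]

def subjects_with(predicate, obj, data):
    # Simple triple-pattern query: all subjects s with (s, predicate, obj).
    return [s for s, p, o in data if p == predicate and o == obj]
```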

  3. Combination of oriented partial differential equation and shearlet transform for denoising in electronic speckle pattern interferometry fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia

    2017-04-01

    Removing the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns is a key step. Among spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool; among transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe pattern denoising that combines the second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into ESPI fringe pattern denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain shearlet transform complement each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, the shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. The experimental results demonstrate the good performance of the proposed SOOPDE-Shearlet.

  4. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessing method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization, and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved, so that a reliability assessment of the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates these indices much faster than the Monte Carlo method while maintaining accuracy.
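
The sampling side of such an assessment can be sketched in a few lines of Python. The turbine power curve, capacities, and load model below are illustrative assumptions, and a plain Monte Carlo loop stands in for the paper's faster discretized method:

```python
import random

def wind_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    # Piecewise wind-turbine power curve in MW (illustrative parameters).
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v - v_in) / (v_rated - v_in)

def assess(n=50000, seed=1):
    # Monte Carlo estimate of LOLP and EENS for a toy distribution feeder.
    rng = random.Random(seed)
    lol_count, ens_sum = 0, 0.0
    for _ in range(n):
        v = rng.weibullvariate(8.0, 2.0)        # Weibull wind speed (scale 8 m/s, shape 2)
        s = rng.betavariate(2.0, 2.0)           # Beta-distributed irradiance fraction
        gen = wind_power(v) + 1.5 * s           # wind plus a 1.5 MW-peak PV park
        load = rng.gauss(2.5, 0.3)              # stochastic local load (MW)
        shortfall = max(0.0, load - gen - 1.0)  # 1 MW firm grid import assumed
        if shortfall > 0:
            lol_count += 1
            ens_sum += shortfall
    return lol_count / n, ens_sum / n           # LOLP, EENS per sampled period
```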

  5. Revisiting radiation patterns in e⁺e⁻ collisions

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, N.; Gieseke, S. [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Inst. fuer Theoretische Physik; Plaetzer, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Skands, P. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2014-02-15

    We propose four simple event-shape variables for semi-inclusive e⁺e⁻ → 4-jet events. The observables and cuts are designed to be especially sensitive to subleading aspects of the event structure, and allow testing the reliability of phenomenological QCD models in greater detail. Three of them, θ_14, θ*, and C_2^(1/5), focus on soft emissions off three-jet topologies with a small opening angle, for which coherence effects beyond the leading QCD dipole pattern are expected to be enhanced. A complementary variable, M_L²/M_H², measures the ratio of the hemisphere masses in 4-jet events with a compressed scale hierarchy (Durham y_23 ∝ y_34), for which subleading 1→3 splitting effects are expected to be enhanced. We consider several different parton-shower models, spanning both conventional and dipole/antenna ones, all tuned to the same e⁺e⁻ reference data, and show that a measurement of the proposed observables would allow for additional significant discriminating power between the models.

  6. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used, known as the usage frequency of the component; some components are of critical usage. The usage frequency decides the weight of each component, and according to their weights, the components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps find the most suitable alternative, a software component, from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. Through ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and its dimensionless measures allow the components to be ranked for estimating CBSS reliability in a non-subjective way. Finally, three case studies illustrate the use of the proposed technique.
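
The crisp core of the MOORA ratio system (the fuzzy variant replaces the matrix entries with triangular fuzzy numbers) can be sketched as follows; the criteria and weights in the test are hypothetical usage-frequency (benefit) and defect-count (cost) data:

```python
import math

def moora_rank(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if criterion j
    # is to be maximized, False if it is a cost to be minimized.
    ncols = len(weights)
    # Vector normalization: divide each column by its Euclidean norm.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    scores = []
    for row in matrix:
        # Ratio-system assessment: weighted benefits minus weighted costs.
        s = sum((1 if benefit[j] else -1) * weights[j] * row[j] / norms[j]
                for j in range(ncols))
        scores.append(s)
    # Rank alternatives by descending assessment value.
    order = sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)
    return order, scores
```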

  7. A novel ontology approach to support design for reliability considering environmental effects.

    Science.gov (United States)

    Sun, Bo; Li, Yu; Ye, Tianyuan; Ren, Yi

    2015-01-01

    Environmental effects are not considered sufficiently in product design, and reliability problems caused by environmental effects are very prominent. This paper proposes a method for applying an ontology approach in product design, so that environmental effects knowledge can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe the domain knowledge of environmental effects, and the related concepts are formally defined using the ontology approach. This model can be applied to organize environmental effects knowledge for different environments. Finally, rubber seals used in a subhumid acid-rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis.

  8. A study of software reliability growth from the perspective of learning effects

    International Nuclear Information System (INIS)

    Chiu, K.-C.; Huang, Y.-S.; Lee, T.-Z.

    2008-01-01

    For the last three decades, reliability growth has been studied to predict software reliability in the testing/debugging phase. Most of the models developed are based on the non-homogeneous Poisson process (NHPP), and S-shaped or exponential-shaped behavior is usually assumed. Unfortunately, such models may be suitable only for particular software failure data, which narrows their scope of application. Therefore, from the perspective of the learning effects that can influence the process of software reliability growth, we considered that efficiency in testing/debugging concerns not only the ability of the testing staff but also the learning effect that comes from inspecting the testing/debugging code. The proposed approach can reasonably describe the S-shaped and exponential-shaped types of behavior simultaneously, and the experimental results show a good fit. A comparative analysis evaluating the effectiveness of the proposed model against other software failure models was also performed. Finally, an optimal software release policy is suggested.
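
The two behaviors the paper unifies are commonly represented by the exponential-type (Goel-Okumoto) and delayed S-shaped NHPP mean value functions; a minimal sketch with illustrative parameters:

```python
import math

def m_exponential(t, a, b):
    # Goel-Okumoto (exponential-type) mean value function: a total faults,
    # b per-fault detection rate.
    return a * (1.0 - math.exp(-b * t))

def m_s_shaped(t, a, b):
    # Delayed S-shaped mean value function: slow early growth models the
    # learning effect during testing/debugging.
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def conditional_reliability(x, t, m, a, b):
    # P(no failure in (t, t+x]) for an NHPP with mean value function m.
    return math.exp(-(m(t + x, a, b) - m(t, a, b)))
```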

  9. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and to competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with a multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  10. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and to competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with a multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
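
The nondomination test behind such Pareto fronts is compact enough to sketch directly; the objective vectors below are hypothetical reliability levels, both to be maximized:

```python
def pareto_front(solutions):
    # Keep the solutions not dominated by any other solution,
    # where every objective is to be maximized.
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b)) and
                any(x > y for x, y in zip(a, b)))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]
```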

  11. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step for establishing correspondences between local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming because of redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model that extracts a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. The paper then proposes a reliable RANSAC framework using this preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
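
A minimal RANSAC loop for robust line fitting illustrates the sample-and-verify step whose cost the preprocessing model reduces by shrinking the candidate set; the data and threshold are illustrative:

```python
import random

def ransac_line(points, n_iters=200, tol=0.5, seed=0):
    # Robustly fit y = m*x + c: repeatedly sample a minimal set (2 points),
    # hypothesize a model, and keep the one with the most inliers.
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample: vertical line, skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = sum(abs(y - (m * x + c)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best, best_inliers
```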

  12. Nanoscale deformation measurements for reliability assessment of material interfaces

    Science.gov (United States)

    Keller, Jürgen; Gollhardt, Astrid; Vogel, Dietmar; Michel, Bernd

    2006-03-01

    With the development and application of micro/nano electro-mechanical systems (MEMS, NEMS) for a variety of market segments, new reliability issues will arise. The understanding of material interfaces is the key to a successful design for reliability of MEMS/NEMS and sensor systems. Furthermore, in the field of BioMEMS, newly developed advanced materials and well-known engineering materials are combined without fully developed reliability concepts for such devices and components. In addition, the increasing interface-to-volume ratio in highly integrated systems and in nanoparticle-filled materials poses challenges for experimental reliability evaluation. New strategies for reliability assessment on the submicron scale are essential to fulfil the needs of future devices. In this paper, an experimental method with nanoscale resolution for the measurement of thermo-mechanical deformation at material interfaces is introduced. The determination of displacement fields is based on scanning probe microscopy (SPM) data. In-situ SPM scans of the analyzed object (i.e. a material interface) are carried out at different thermo-mechanical load states. The obtained images are compared by grayscale cross-correlation algorithms, which allows the tracking of local image patterns of the analyzed surface structure. The measurement results are full-field displacement fields with nanometer resolution. With the obtained data, the mixed-mode type of loading at material interfaces can be analyzed with the highest resolution for future needs in microsystems and nanotechnology.

  13. Reliability-based failure cause assessment of collapsed bridge during construction

    International Nuclear Information System (INIS)

    Choi, Hyun-Ho; Lee, Sang-Yoon; Choi, Il-Yoon; Cho, Hyo-Nam; Mahadevan, Sankaran

    2006-01-01

    In many forensic reports, failure cause assessments have so far usually been carried out using a deterministic approach. However, such an investigation may lead to unreasonable results far from the real collapse scenario, because the deterministic approach does not systematically take into account information on the uncertainties involved in structural failures. A reliability-based failure cause assessment (reliability-based forensic engineering) methodology is developed that can incorporate the uncertainties involved in structures and structural failures; it is applied to a collapsed bridge in order to identify the most critical failure scenario and find the cause that triggered the collapse. Moreover, to save evaluation time and cost, an algorithm for automated event tree analysis (ETA) is proposed, which automatically calculates the failure probabilities of the failure events and the occurrence probabilities of failure scenarios. Also, for the reliability analysis, uncertainties are estimated more reasonably by using a Bayesian approach based on the experimental laboratory testing data in the forensic report. To demonstrate its applicability, the proposed approach is applied to the Hang-ju Grand Bridge, which collapsed during construction, and compared with the deterministic approach.

  14. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    This paper describes the results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way. Component and system features can be stated in a formal manner and subsequently used, along with control statements, to form a structured program. The program can be compiled and executed on a general-purpose computer or a special-purpose simulator. (DG)

  15. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, but normally do not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely operational reliability and testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some of the existing literature. Using different reliability concepts leads to different estimated reliability values, and further to different reliability-based decisions. The difference between the estimated reliabilities is studied and its effect on the optimal release time is investigated.

  16. Bayesian approach for the reliability assessment of corroded interdependent pipe networks

    International Nuclear Information System (INIS)

    Ait Mokhtar, El Hassene; Chateauneuf, Alaa; Laggoune, Radouane

    2016-01-01

    Pipelines under corrosion are subject to varying environmental conditions, and consequently it is difficult to build realistic corrosion models. In the present work, a Bayesian methodology is proposed that allows the corrosion model parameters to be updated according to the evolution of environmental conditions. For the reliability assessment of dependent structures, Bayesian networks are used to provide an interesting qualitative and quantitative description of the information in the system. The qualitative contribution lies in the modeling of a complex system, composed of dependent pipelines, as a Bayesian network. The quantitative one lies in the evaluation of the dependencies between pipelines through a new method for generating conditional probability tables. The effectiveness of Bayesian updating is illustrated through an application in which the new reliability of degraded (corroded) pipe networks is assessed. - Highlights: • A methodology for Bayesian network modeling of pipe networks is proposed. • A Bayesian approach based on the Metropolis-Hastings algorithm is conducted for corrosion model updating. • The reliability of the corroded pipe network is assessed by considering the interdependencies between the pipelines.
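
The flavor of such updating can be sketched with a tiny Metropolis-Hastings sampler. The power-law corrosion model d(t) = k·sqrt(t), the priors, and the synthetic inspection data below are illustrative assumptions, not the paper's model:

```python
import math, random

def log_posterior(k, obs, times, sigma=0.1, prior_mu=0.3, prior_sd=0.1):
    # Gaussian prior on the corrosion rate k, Gaussian likelihood for
    # observed pit depths under the assumed model d = k * sqrt(t).
    if k <= 0:
        return -math.inf
    lp = -0.5 * ((k - prior_mu) / prior_sd) ** 2
    for d, t in zip(obs, times):
        lp += -0.5 * ((d - k * math.sqrt(t)) / sigma) ** 2
    return lp

def metropolis_hastings(obs, times, n=5000, step=0.05, seed=3):
    # Random-walk Metropolis-Hastings over the scalar rate k.
    rng = random.Random(seed)
    k, lp = 0.3, log_posterior(0.3, obs, times)
    samples = []
    for _ in range(n):
        cand = k + rng.gauss(0.0, step)
        lp_cand = log_posterior(cand, obs, times)
        # Accept with probability min(1, exp(lp_cand - lp)).
        if lp_cand - lp >= 0 or rng.random() < math.exp(lp_cand - lp):
            k, lp = cand, lp_cand
        samples.append(k)
    return samples
```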

  17. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. Here, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed, and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement-based approaches, holistic techniques and decision-analytic approaches. (UK)

  18. Efficiency evaluation of an electronic equipment: availability, reliability and maintenance

    International Nuclear Information System (INIS)

    Guyot, C.

    1966-01-01

    This concept of efficiency, often called ''system effectiveness'', is presented and analyzed in terms of reliability and maintenance. It allows the availability factor of an electronic equipment to be defined. A procedure of evaluation is proposed. (A.L.B.)

  19. Optimization of control rod patterns using neural networks

    International Nuclear Information System (INIS)

    Mejia S, D.M.; Ortiz S, J.J.

    2005-01-01

    In this work the RENOPBC system, based on a recurrent multi-state neural network, is presented for the optimization of control rod patterns in an equilibrium cycle of a boiling water reactor (BWR). The design of rod patterns must satisfy thermal operating limits, keep the reactor critical, and make the axial power profile match a predetermined target across several burnup steps. The rod patterns proposed by the system are comparable to those proposed by human experts with many man-hours of experience. These results are compared with those obtained by other techniques, such as genetic algorithms, ant colony optimization and tabu search, for the same operating cycle. The comparison shows that the proposed rod patterns offer greater operational ease than those obtained with the other techniques. (Author)

  20. Tactile acuity charts: a reliable measure of spatial acuity.

    Science.gov (United States)

    Bruns, Patrick; Camargo, Carlos J; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R; Röder, Brigitte

    2014-01-01

    For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds.
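
    Test-retest reliability of the kind reported above is commonly summarized by a correlation between the two sessions' scores (an intraclass correlation is the stricter choice). A minimal sketch; the helper and the threshold values below are hypothetical, not the study's data.

```python
def pearson_r(x, y):
    """Pearson correlation between two sessions of acuity thresholds."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# hypothetical acuity thresholds (mm) from two sessions one week apart
session1 = [1.6, 2.1, 1.9, 2.4, 1.7, 2.0]
session2 = [1.7, 2.0, 1.8, 2.5, 1.8, 2.1]
```

A value near 1 indicates that participants keep their rank order across sessions, which is what "high test-retest reliability" asserts.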

  1. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. The nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  2. Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation

    Directory of Open Access Journals (Sweden)

    Chunbo Yang

    2015-01-01

    Full Text Available The K-out-of-N configuration is a typical form of redundancy used to improve system reliability: at least K of N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate the reliability, because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component increases after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method treats the value of K as a variable which is derived from the performance degradation model, while the load-sharing effect is evaluated by the tampered failure rate model. A Monte-Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is lower than that ignoring component degradation.
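
    The tampered-failure-rate idea can be sketched with a small Monte-Carlo routine. The equal load-sharing rule, the load-dependence exponent and all parameter values below are illustrative assumptions, not the paper's solar panel data; K is held fixed here rather than derived from a degradation model.

```python
import random

def simulate_failure_time(n, k, lam0, alpha, rng):
    """One Monte-Carlo history of a load-sharing k-out-of-n system.
    Tampered failure rate: each survivor's rate scales with its share
    of the total load (alpha = load-dependence exponent, assumed)."""
    survivors = n
    t = 0.0
    while survivors >= k:
        # each survivor carries n/survivors times its nominal load
        rate_each = lam0 * (n / survivors) ** alpha
        # time to next failure among `survivors` i.i.d. exponentials
        t += rng.expovariate(survivors * rate_each)
        survivors -= 1
    return t  # time at which fewer than k components remain

def reliability(n, k, lam0, alpha, mission, trials=20000, seed=1):
    """Fraction of histories that survive past the mission time."""
    rng = random.Random(seed)
    ok = sum(simulate_failure_time(n, k, lam0, alpha, rng) > mission
             for _ in range(trials))
    return ok / trials
```

Requiring more survivors (larger K) lowers the estimated mission reliability, which is the direction of the effect the paper quantifies.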

  3. Reliability analysis and assessment of structural systems

    International Nuclear Information System (INIS)

    Yao, J.T.P.; Anderson, C.A.

    1977-01-01

    The study of structural reliability deals with the probability of having satisfactory performance of the structure under consideration within any specific time period. To pursue this study, it is necessary to apply available knowledge and methodology in structural analysis (including dynamics) and design, behavior of materials and structures, experimental mechanics, and the theory of probability and statistics. In addition, various severe loading phenomena such as strong motion earthquakes and wind storms are important considerations. For three decades now, much work has been done on reliability analysis of structures, and during this past decade, certain so-called 'Level I' reliability-based design codes have been proposed and are in various stages of implementation. These contributions will be critically reviewed and summarized in this paper. Because of the undesirable consequences resulting from the failure of nuclear structures, it is important and desirable to consider the structural reliability in the analysis and design of these structures. Moreover, after these nuclear structures are constructed, it is desirable for engineers to be able to assess the structural reliability periodically as well as immediately following the occurrence of severe loading conditions such as a strong-motion earthquake. During this past decade, increasing use has been made of techniques of system identification in structural engineering. On the basis of non-destructive test results, various methods have been developed to obtain an adequate mathematical model (such as the equations of motion with more realistic parameters) to represent the structural system

  4. A Novel Reliability Enhanced Handoff Method in Future Wireless Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Wang YuPeng

    2016-01-01

    Full Text Available As demand increases, future networks will follow the trends of network variety and service flexibility, which require heterogeneous network deployment and reliable communication methods. In practice, most communication failures happen because of bad radio link quality; high-speed users in particular suffer from radio link failure, which causes communication interrupts and the need for radio link recovery. To make communication more reliable, especially for high-mobility users, we propose a novel communication handoff mechanism that reduces the occurrence of service interrupts. Computer simulation shows that service reliability is greatly improved.

  5. Approach to assurance of reliability of linear accelerator operation observations

    International Nuclear Information System (INIS)

    Bakov, S.M.; Borovikov, A.A.; Kavkun, S.L.

    1994-01-01

    A system approach to assuring the reliability of observations of linear accelerator operation is proposed. The basic principles of this method are the use of dependences between the facility parameters and a reduction in the number of data acquisition channels of the system without replacement of a failed channel by a reserve one. The signal commutation unit, whose introduction into the data acquisition system essentially increases the reliability of the measurement system on account of active redundancy, is considered in detail. 8 refs. 6 figs

  6. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagram (RBD), Markov Model, and Event Tree Analysis (ETA) have been widely used and approved in some industries. However, they have limitations when applied to complicated robot systems that include software, such as intelligent reactor inspection robots. In practice, an expert's judgment therefore plays an important role in estimating the reliability of such a system, because experts can deal with diverse evidence related to the reliability and perform an inference based on it. The method proposed in this paper combines qualitative and quantitative evidence and performs an inference in the way experts do. Furthermore, it does this work formally and quantitatively, unlike human experts, by the benefits of Bayesian Nets (BNs)
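
    The core of the approach, combining qualitative and quantitative evidence through Bayes' rule, can be sketched with a two-evidence network. The reliability levels, priors and conditional probabilities below are illustrative assumptions, not the paper's numbers.

```python
# prior over the robot's reliability level (illustrative numbers)
prior = {"high": 0.5, "medium": 0.3, "low": 0.2}
# P(evidence | level) for two independent evidence sources
p_field_ok = {"high": 0.95, "medium": 0.7, "low": 0.3}  # qualitative expert report
p_test_pass = {"high": 0.9, "medium": 0.6, "low": 0.2}  # quantitative test outcome

def posterior(field_ok, test_pass):
    """Posterior over reliability levels given the two observations."""
    unnorm = {}
    for lvl, p in prior.items():
        like = p_field_ok[lvl] if field_ok else 1 - p_field_ok[lvl]
        like *= p_test_pass[lvl] if test_pass else 1 - p_test_pass[lvl]
        unnorm[lvl] = p * like
    z = sum(unnorm.values())          # normalizing constant
    return {lvl: v / z for lvl, v in unnorm.items()}
```

Positive evidence from both sources shifts mass toward "high"; negative evidence from both shifts it toward "low", mimicking how an expert would weigh mixed findings.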

  7. A comparative reliability analysis of ETCS train radio communications

    NARCIS (Netherlands)

    Hermanns, H.; Becker, B.; Jansen, D.N.; Damm, W.; Usenko, Y.S.; Fränzle, M.; Olderog, E.-R.; Podelski, A.; Wilhelm, R.

    StoCharts have been proposed as a UML statechart extension for performance and dependability evaluation, and were applied in the context of train radio reliability assessment to show the principal tractability of realistic cases with this approach. In this paper, we extend on this bare feasibility

  8. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  9. Valuing long-haul and metropolitan freight travel time and reliability

    Science.gov (United States)

    2000-12-01

    Most evaluations and economic assessments of transportation proposals and policies in Australia omit a valuation of time spent in transit for individual items or loads of freight. Knowledge of delays and the practical value of reliability can be usefu...

  10. A fuzzy-based reliability approach to evaluate basic events of fault tree analysis for nuclear power plant probabilistic safety assessment

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry

    2014-01-01

    Highlights: • We propose a fuzzy-based reliability approach to evaluate basic event reliabilities. • It implements the concepts of failure possibilities and fuzzy sets. • Experts evaluate basic event failure possibilities using qualitative words. • Triangular fuzzy numbers mathematically represent qualitative failure possibilities. • It is a very good alternative to the conventional reliability approach. - Abstract: Fault tree analysis has been widely utilized as a tool for nuclear power plant probabilistic safety assessment. This analysis can be completed only if all basic events of the system fault tree have quantitative failure rates or failure probabilities. However, it is difficult to obtain those failure data due to insufficient data, changing environments, or new components. This study proposes a fuzzy-based reliability approach to evaluate basic events of system fault trees for which precise probability distributions of time to failure are not available. It applies the concept of failure possibilities to qualitatively evaluate basic events and the concept of fuzzy sets to quantitatively represent the corresponding failure possibilities. To demonstrate the feasibility and effectiveness of the proposed approach, actual basic event failure probabilities collected from the operational experience of the Davis-Besse design of the Babcock and Wilcox reactor protection system fault tree are used to benchmark the failure probabilities generated by the proposed approach. The results confirm that the proposed fuzzy-based reliability approach is a suitable alternative to the conventional probabilistic reliability approach when basic events do not have corresponding quantitative historical failure data for determining their reliability characteristics. Hence, it overcomes the limitation of conventional fault tree analysis for nuclear power plant probabilistic safety assessment
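
    The possibility-to-probability pipeline can be sketched as follows. The word scale and its triangular fuzzy numbers are illustrative assumptions, and the final conversion uses Onisawa's logarithmic function, which is common in this fuzzy-reliability literature but not spelled out in the abstract.

```python
# qualitative words mapped to triangular fuzzy numbers (a, b, c) on [0, 1]
# (membership parameters assumed for illustration)
tfn = {
    "very low":  (0.0, 0.1, 0.2),
    "low":       (0.1, 0.25, 0.4),
    "medium":    (0.35, 0.5, 0.65),
    "high":      (0.6, 0.75, 0.9),
    "very high": (0.8, 0.9, 1.0),
}

def centroid(t):
    """Centroid defuzzification of a triangular fuzzy number."""
    a, b, c = t
    return (a + b + c) / 3.0

def failure_probability(word):
    """Convert a defuzzified failure possibility (FPs) to a failure
    probability using Onisawa's function (assumed): FP = 10**(-K),
    K = ((1 - FPs) / FPs)**(1/3) * 2.301."""
    fps = centroid(tfn[word])
    if fps == 0.0:
        return 0.0
    k = ((1.0 - fps) / fps) ** (1.0 / 3.0) * 2.301
    return 10.0 ** (-k)
```

A stronger qualitative word defuzzifies to a larger possibility and hence maps to a larger failure probability, preserving the expert's ordering.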

  11. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very...
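
    The numerical-integration route to a structural failure probability can be sketched for the textbook resistance-load case g = R - S with normal R and S, where a closed form exists to check against. This is an illustration of the technique, not the program described in the record.

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sd):
    return 0.5 * math.erfc((mu - x) / (sd * math.sqrt(2)))

def pf_integration(mu_r, sd_r, mu_s, sd_s, n=2000):
    """Failure probability P(R < S) by Simpson's rule over the load S:
    Pf = integral of f_S(s) * P(R < s) ds (n must be even)."""
    lo, hi = mu_s - 8 * sd_s, mu_s + 8 * sd_s   # truncate negligible tails
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)   # Simpson weights
        total += w * norm_pdf(s, mu_s, sd_s) * norm_cdf(s, mu_r, sd_r)
    return total * h / 3.0

def pf_closed_form(mu_r, sd_r, mu_s, sd_s):
    """Exact result for normal R and S via the reliability index beta."""
    beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
    return 0.5 * math.erfc(beta / math.sqrt(2))
```

The quadrature result matches the closed form to many digits, which is the kind of cross-check such programs rely on before tackling cases with no analytic answer.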

  12. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    -sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, where the BetaEB was not extended for this condition. Therefore, the proposed approach would be more suitable and reliable on average for the identification of DE genes between two or more conditions with multiple patterns of expression.

  13. Decision Diagram Based Symbolic Algorithm for Evaluating the Reliability of a Multistate Flow Network

    Directory of Open Access Journals (Sweden)

    Rongsheng Dong

    2016-01-01

    Full Text Available Evaluating the reliability of a Multistate Flow Network (MFN) is an NP-hard problem. Ordered binary decision diagrams (OBDD) or variants thereof, such as multivalued decision diagrams (MDD), are compact and efficient data structures suitable for dealing with large-scale problems. Two symbolic algorithms for evaluating the reliability of an MFN, MFN_OBDD and MFN_MDD, are proposed in this paper. In the algorithms, several operating functions are defined to prune the generated decision diagrams, so that the state space of capacity combinations is further compressed and the operational complexity of the decision diagrams is further reduced. The related theoretical proofs and complexity analysis are also carried out. Experimental results show the following: (1) compared to the existing decomposition algorithm, the proposed algorithms take less memory space and fewer loops; (2) the number of nodes and the number of variables of the MDD generated in the MFN_MDD algorithm are much smaller than those of the OBDD built in the MFN_OBDD algorithm; (3) in two cases with the same number of arcs, the proposed algorithms are more suitable for calculating the reliability of sparse networks.
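
    What decision diagrams compress is the exponential space of component-state combinations. The brute-force baseline below makes that space explicit for a binary-state two-terminal "bridge" network (an illustrative special case, not the multistate algorithms of the paper): it enumerates all 2^|E| edge states and sums the probabilities of the connected ones.

```python
from itertools import product

# bridge network: nodes 0..3, source 0, sink 3 (illustrative example)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def connected(up):
    """BFS from the source over working edges; True if the sink is reached."""
    reach, frontier = {0}, [0]
    while frontier:
        u = frontier.pop()
        for (a, b), ok in zip(edges, up):
            if ok:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in reach:
                        reach.add(y)
                        frontier.append(y)
    return 3 in reach

def two_terminal_reliability(p):
    """Exact reliability by enumerating all 2^|E| edge states; each edge
    works independently with probability p. Decision diagrams compress
    exactly this exponential enumeration."""
    r = 0.0
    for up in product([True, False], repeat=len(edges)):
        pr = 1.0
        for ok in up:
            pr *= p if ok else (1.0 - p)
        if connected(up):
            r += pr
    return r
```

For p = 0.9 the bridge formula 2p^2 + 2p^3 - 5p^4 + 2p^5 gives 0.97848, which the enumeration reproduces; the point of OBDD/MDD methods is to get the same number without visiting all 2^|E| states.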

  14. Integrating generation and transmission networks reliability for unit commitment solution

    International Nuclear Information System (INIS)

    Jalilzadeh, S.; Shayeghi, H.; Hadadian, H.

    2009-01-01

    This paper presents a new method that integrates generation and transmission network reliability for the solution of the unit commitment (UC) problem. In order to have a more accurate assessment of the system reserve requirement, the unavailability of transmission lines is taken into account in addition to the unavailability of generation units. In this way, evaluation of the required spinning reserve (SR) capacity is performed by applying reliability constraints based on the loss of load probability (LOLP) and expected energy not supplied (EENS) indices. Calculation of the above parameters is accomplished by a novel procedure based on linear programming, which also minimizes them to achieve the optimum level of SR capacity and, consequently, a cost-benefit reliability-constrained UC schedule. In addition, a powerful solution technique called 'integer-coded genetic algorithm (ICGA)' is used for the solution of the proposed method. Numerical results on the IEEE reliability test system show that considering transmission network unavailability has an important influence on the reliability indices of the UC schedules
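
    The two reliability indices can be computed from a capacity outage probability table (COPT) built by convolution over the committed units. The two-unit example below is an illustrative sketch, not the IEEE reliability test system and not the paper's linear-programming procedure.

```python
def copt(units):
    """Capacity outage probability table by convolution.
    units: list of (capacity_mw, forced_outage_rate)."""
    table = {0: 1.0}                     # outage level (MW) -> probability
    for cap, forr in units:
        nxt = {}
        for out, p in table.items():
            nxt[out] = nxt.get(out, 0.0) + p * (1.0 - forr)       # unit up
            nxt[out + cap] = nxt.get(out + cap, 0.0) + p * forr   # unit down
        table = nxt
    return table

def lolp_eens(units, load, hours=1.0):
    """LOLP and EENS (MWh) for a constant load over `hours`."""
    total = sum(c for c, _ in units)
    table = copt(units)
    lolp = sum(p for out, p in table.items() if total - out < load)
    eens = hours * sum(p * (load - (total - out))
                       for out, p in table.items() if total - out < load)
    return lolp, eens
```

For two 50 MW units with a forced outage rate of 0.1 serving a 60 MW load, losing either unit already causes a shortfall, so LOLP = 0.19 and EENS = 2.4 MWh per hour of exposure.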

  15. Aviation Fuel System Reliability and Fail-Safety Analysis. Promising Alternative Ways for Improving the Fuel System Reliability

    Directory of Open Access Journals (Sweden)

    I. S. Shumilov

    2017-01-01

    Full Text Available The paper deals with design requirements for an aviation fuel system (AFS): basic design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and of pipeline systems in general, based on the selection of design solutions. It is extremely advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases avoid heavy disasters. The AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. However, this principle is currently not fully followed, and the AFS loses in reliability and fail-safety as compared with the AHS. For the examined failures (single failures and their combinations), the guidelines to ensure AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and of aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through redundancy of fuel system components, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to two failures without serious consequences for the flight, and its reliability and fail-safety will be similar to those of the AHS; however, the above improvement measures will lead to a slightly increased total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one, to run in case the first one fails for some reason. The second pump, like the first pump, can be driven from the

  16. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional moment based maximum entropy method provides a very advanced approach to PDF estimation; its main shortcoming is that it limits the application of the reliability analysis method to structures with independent inputs. In fact, structures with correlated inputs are common in engineering, so this paper improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; the UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations of the proposed method in reliability analysis, which is determined by the UT, is small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
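
    The UT estimates moments by propagating a small, fixed set of sigma points through the performance function. Below is a minimal 2-D sketch using the original unscented transform with scaling parameter kappa; the performance function and numbers are illustrative, and fractional moments would be obtained the same way by propagating |g|^q instead of g.

```python
import math

def unscented_moments(mean, cov, g, kappa=1.0):
    """Sigma-point (unscented) estimate of E[g(X)] and Var[g(X)] for a
    2-D input X with mean `mean` and covariance `cov`; correlations are
    handled through the Cholesky factor (2x2 case, done by hand)."""
    n = 2
    s = n + kappa
    c00 = math.sqrt(s * cov[0][0])           # Cholesky factor of s * cov
    c10 = s * cov[1][0] / c00
    c11 = math.sqrt(s * cov[1][1] - c10 ** 2)
    cols = [(c00, c10), (0.0, c11)]
    pts = [tuple(mean)]                      # central sigma point
    for col in cols:                         # symmetric pairs of points
        pts.append((mean[0] + col[0], mean[1] + col[1]))
        pts.append((mean[0] - col[0], mean[1] - col[1]))
    weights = [kappa / s] + [1.0 / (2.0 * s)] * (2 * n)
    vals = [g(p) for p in pts]               # only 2n + 1 evaluations
    m = sum(w * v for w, v in zip(weights, vals))
    var = sum(w * (v - m) ** 2 for w, v in zip(weights, vals))
    return m, var
```

Only 2n + 1 function evaluations are needed, which is why the record can claim a small evaluation count; for a linear g the mean and variance are recovered exactly, correlation included.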

  17. Inter- and intra-observer reliability of masking in plantar pressure measurement analysis.

    Science.gov (United States)

    Deschamps, K; Birch, I; Mc Innes, J; Desloovere, K; Matricali, G A

    2009-10-01

    Plantar pressure measurement is an important tool in gait analysis. Manual placement of small masks (masking) is increasingly used to calculate plantar pressure characteristics, but little is known concerning the reliability of manual masking. The aim of this study was to determine the reliability of masking on 2D plantar pressure footprints in a population with forefoot deformity (i.e. hallux valgus). Using a random repeated-measures design, four observers identified the third metatarsal head on a peak-pressure barefoot footprint, using a small mask. Subsequently, the location of all five metatarsal heads was identified, using the same size of masks and the same protocol. The 2D positional variation of the masks and the peak pressure (PP) and pressure time integral (PTI) values of each mask were calculated. For single-masking, the lowest inter-observer reliability was found for the distal-proximal direction, with a clear, adverse impact on the reliability of the pressure characteristics (PP and PTI). In the medial-lateral direction the inter-observer reliability could be scored as high. Intra-observer reliability was better and could be scored as high or good for both directions, with a correlated improvement in the reliability of the pressure characteristics. Reliability of multi-masking showed a similar pattern, but overall values tended to be lower. Therefore, small-sized masking to define pressure characteristics in the forefoot should be done with care.

  18. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  19. Ensuring Reliability of Reinforced Concrete Structures of “Northern Installation”

    Science.gov (United States)

    Pinus, B. I.; Pinus, Zh N.

    2017-11-01

    One of the directions in actively managing the design reliability of building structures is the correction of functional models of criterial efficiency based on the specific conditions of their operation. With respect to reinforced concrete structures of the “northern installation”, such an approach makes it necessary to consider the kinetics and statistical patterns of wear under the systemic and cyclical impact of low negative temperatures and humidity. Concurrently, the task of ensuring at the design stage that “northern installation” structures are equivalent to ordinary structures for the duration of the expected service life is considered. In a first approximation, this task is proposed to be solved by correcting significant parameters of the concrete's constructive properties in correlation with the kinetics of exhaustion of the concrete's design (controlled) frost resistance. This work presents the results of a statistically representative generalization of the changes in strength, modulus of elasticity and ultimate deformations in cryogenic and thawed states under cyclic impact of temperatures down to minus 40°C. Based on these results, dynamic models of the dependency of the concrete's standardized characteristics on its frost resistance exhaustion level have been developed.

  20. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material property variability and loading uncertainties to give the reliability estimate. The approach taken to predict the reliability of design-related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then statistically assess the contributions of the various contributing variables. The probability of a failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM is used to illustrate this methodology
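
    The interpretation in the last sentences, failure as the chance combination of extreme values of stress, strength and material variability, is exactly what a crude Monte-Carlo combination computes. All distributions and numbers below are illustrative assumptions, not CDM design data.

```python
import random

def failure_probability(n=100000, seed=42):
    """Monte-Carlo combination of variability sources for one insulation
    failure mode: fail when degraded strength <= applied stress.
    Distributions and units are illustrative only."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        strength = rng.gauss(30.0, 3.0)       # dielectric strength, kV (assumed)
        stress = rng.gauss(18.0, 2.0)         # electrical stress, kV (assumed)
        degradation = rng.uniform(0.8, 1.0)   # material property variability
        if strength * degradation <= stress:
            fails += 1
    return fails / n
```

With a nominal margin of several standard deviations, only the rare histories in which low strength, high stress and strong degradation coincide produce a failure, so the estimated probability is small but nonzero.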